Jan 17 00:27:36.106896 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:27:36.106947 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:27:36.106967 kernel: BIOS-provided physical RAM map: Jan 17 00:27:36.106980 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 17 00:27:36.106993 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 17 00:27:36.107006 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 17 00:27:36.107023 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 17 00:27:36.107043 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 17 00:27:36.107056 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 17 00:27:36.107071 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 17 00:27:36.107084 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 17 00:27:36.107098 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 17 00:27:36.107112 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 17 00:27:36.107127 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 17 00:27:36.107151 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 17 00:27:36.107168 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 17 00:27:36.107184 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Jan 17 00:27:36.107201 kernel: NX (Execute Disable) protection: active Jan 17 00:27:36.107227 kernel: APIC: Static calls initialized Jan 17 00:27:36.107244 kernel: efi: EFI v2.7 by EDK II Jan 17 00:27:36.107261 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Jan 17 00:27:36.107278 kernel: SMBIOS 2.4 present. Jan 17 00:27:36.107295 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 17 00:27:36.107312 kernel: Hypervisor detected: KVM Jan 17 00:27:36.107333 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:27:36.107349 kernel: kvm-clock: using sched offset of 13026890038 cycles Jan 17 00:27:36.107367 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:27:36.107384 kernel: tsc: Detected 2299.998 MHz processor Jan 17 00:27:36.107402 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:27:36.107419 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:27:36.107437 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 17 00:27:36.107454 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 17 00:27:36.107470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:27:36.107491 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 17 00:27:36.107508 kernel: Using GB pages for direct mapping Jan 17 00:27:36.107525 kernel: Secure boot disabled Jan 17 00:27:36.107541 kernel: ACPI: Early table checksum verification disabled Jan 17 00:27:36.107559 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 17 00:27:36.107576 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 17 00:27:36.107594 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 17 00:27:36.107618 kernel: 
ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 17 00:27:36.107640 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 17 00:27:36.107659 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 17 00:27:36.107678 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 17 00:27:36.107696 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 17 00:27:36.107728 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 17 00:27:36.107755 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 17 00:27:36.107775 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 17 00:27:36.107789 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 17 00:27:36.107803 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 17 00:27:36.107818 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 17 00:27:36.107833 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 17 00:27:36.107847 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 17 00:27:36.107862 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 17 00:27:36.107877 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 17 00:27:36.107891 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 17 00:27:36.107911 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 17 00:27:36.107926 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:27:36.107943 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:27:36.107959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 00:27:36.107976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 17 
00:27:36.107993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 00:27:36.108009 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 00:27:36.108026 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 00:27:36.108042 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 17 00:27:36.108064 kernel: Zone ranges: Jan 17 00:27:36.108081 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:27:36.108099 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 00:27:36.108114 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:27:36.108131 kernel: Movable zone start for each node Jan 17 00:27:36.108168 kernel: Early memory node ranges Jan 17 00:27:36.108186 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 00:27:36.108203 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 00:27:36.108229 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 00:27:36.108252 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 00:27:36.108270 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:27:36.108287 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 00:27:36.108304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:27:36.108321 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 00:27:36.108339 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 00:27:36.108355 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 00:27:36.108373 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 00:27:36.108390 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 00:27:36.108411 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:27:36.108429 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 17 00:27:36.108446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:27:36.108464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:27:36.108481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:27:36.108498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:27:36.108516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:27:36.108533 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:27:36.108550 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:27:36.108571 kernel: Booting paravirtualized kernel on KVM Jan 17 00:27:36.108589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:27:36.108607 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:27:36.108624 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 17 00:27:36.108641 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:27:36.108658 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:27:36.108675 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:27:36.108692 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:27:36.108712 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:27:36.108757 kernel: random: crng init done Jan 17 00:27:36.108774 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 00:27:36.108792 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 
17 00:27:36.108809 kernel: Fallback order for Node 0: 0 Jan 17 00:27:36.108827 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 00:27:36.108844 kernel: Policy zone: Normal Jan 17 00:27:36.108862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:27:36.108879 kernel: software IO TLB: area num 2. Jan 17 00:27:36.108896 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347148K reserved, 0K cma-reserved) Jan 17 00:27:36.108918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:27:36.108935 kernel: Kernel/User page tables isolation: enabled Jan 17 00:27:36.108951 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:27:36.108968 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:27:36.108985 kernel: Dynamic Preempt: voluntary Jan 17 00:27:36.109003 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:27:36.109021 kernel: rcu: RCU event tracing is enabled. Jan 17 00:27:36.109039 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:27:36.109075 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:27:36.109093 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:27:36.109112 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:27:36.109134 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:27:36.109152 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:27:36.109171 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 00:27:36.109189 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 00:27:36.109215 kernel: Console: colour dummy device 80x25 Jan 17 00:27:36.109237 kernel: printk: console [ttyS0] enabled Jan 17 00:27:36.109256 kernel: ACPI: Core revision 20230628 Jan 17 00:27:36.109274 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:27:36.109292 kernel: x2apic enabled Jan 17 00:27:36.109311 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:27:36.109329 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 00:27:36.109348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:27:36.109366 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 17 00:27:36.109385 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 00:27:36.109407 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 00:27:36.109424 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:27:36.109443 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 00:27:36.109463 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 00:27:36.109482 kernel: Spectre V2 : Mitigation: IBRS Jan 17 00:27:36.109500 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:27:36.109519 kernel: RETBleed: Mitigation: IBRS Jan 17 00:27:36.109538 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 00:27:36.109558 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 00:27:36.109582 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 00:27:36.109601 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 00:27:36.109621 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:27:36.109640 kernel: active return thunk: its_return_thunk Jan 17 00:27:36.109660 
kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:27:36.109680 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:27:36.109699 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:27:36.109741 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:27:36.109761 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:27:36.109786 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 00:27:36.109805 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:27:36.109825 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:27:36.109844 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:27:36.109863 kernel: landlock: Up and running. Jan 17 00:27:36.109883 kernel: SELinux: Initializing. Jan 17 00:27:36.109902 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:27:36.109932 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:27:36.109952 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 00:27:36.109975 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:27:36.109995 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:27:36.110022 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:27:36.110041 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 00:27:36.110061 kernel: signal: max sigframe size: 1776 Jan 17 00:27:36.110086 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:27:36.110106 kernel: rcu: Max phase no-delay instances is 400. 
Jan 17 00:27:36.110125 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:27:36.110145 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:27:36.110175 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:27:36.110195 kernel: .... node #0, CPUs: #1 Jan 17 00:27:36.110221 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 00:27:36.110241 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 00:27:36.110262 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:27:36.110280 kernel: smpboot: Max logical packages: 1 Jan 17 00:27:36.110301 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 00:27:36.110320 kernel: devtmpfs: initialized Jan 17 00:27:36.110344 kernel: x86/mm: Memory block size: 128MB Jan 17 00:27:36.110364 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 00:27:36.110384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:27:36.110403 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:27:36.110423 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:27:36.110443 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:27:36.110462 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:27:36.110482 kernel: audit: type=2000 audit(1768609654.845:1): state=initialized audit_enabled=0 res=1 Jan 17 00:27:36.110501 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:27:36.110525 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:27:36.110544 kernel: cpuidle: using governor menu Jan 17 00:27:36.110563 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 
00:27:36.110588 kernel: dca service started, version 1.12.1 Jan 17 00:27:36.110608 kernel: PCI: Using configuration type 1 for base access Jan 17 00:27:36.110627 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 00:27:36.110647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:27:36.110666 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:27:36.110686 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:27:36.110710 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:27:36.110763 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:27:36.110783 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:27:36.110802 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:27:36.110822 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 00:27:36.110841 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:27:36.110861 kernel: ACPI: Interpreter enabled Jan 17 00:27:36.110881 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:27:36.110900 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:27:36.110925 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:27:36.110945 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 00:27:36.110963 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 00:27:36.110983 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:27:36.111266 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:27:36.111473 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 00:27:36.111659 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 00:27:36.111688 kernel: PCI host bridge to bus 0000:00 Jan 17 
00:27:36.111916 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:27:36.112096 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:27:36.112280 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:27:36.112453 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 17 00:27:36.112643 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:27:36.112890 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 00:27:36.113105 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 17 00:27:36.113321 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 00:27:36.113515 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 00:27:36.113757 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 17 00:27:36.113960 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 17 00:27:36.114154 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 17 00:27:36.114371 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:27:36.114567 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 17 00:27:36.114794 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 17 00:27:36.114999 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:27:36.115192 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 17 00:27:36.115389 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 17 00:27:36.115414 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:27:36.115441 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:27:36.115461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:27:36.115481 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:27:36.115501 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 00:27:36.115520 
kernel: iommu: Default domain type: Translated Jan 17 00:27:36.115540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:27:36.115559 kernel: efivars: Registered efivars operations Jan 17 00:27:36.115579 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:27:36.115599 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:27:36.115623 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 17 00:27:36.115642 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 17 00:27:36.115671 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 17 00:27:36.115689 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 17 00:27:36.115729 kernel: vgaarb: loaded Jan 17 00:27:36.115760 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:27:36.115780 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:27:36.115799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:27:36.115819 kernel: pnp: PnP ACPI init Jan 17 00:27:36.115844 kernel: pnp: PnP ACPI: found 7 devices Jan 17 00:27:36.115864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:27:36.115884 kernel: NET: Registered PF_INET protocol family Jan 17 00:27:36.115905 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 00:27:36.115924 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 00:27:36.115944 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:27:36.115963 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:27:36.115983 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 00:27:36.116003 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 00:27:36.116026 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:27:36.116046 kernel: UDP-Lite 
hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:27:36.116065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:27:36.116085 kernel: NET: Registered PF_XDP protocol family Jan 17 00:27:36.116290 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:27:36.116469 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:27:36.116641 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:27:36.116846 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 17 00:27:36.117050 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 00:27:36.117075 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:27:36.117095 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:27:36.117114 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 17 00:27:36.117133 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:27:36.117151 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:27:36.117168 kernel: clocksource: Switched to clocksource tsc Jan 17 00:27:36.117184 kernel: Initialise system trusted keyrings Jan 17 00:27:36.117206 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 17 00:27:36.117234 kernel: Key type asymmetric registered Jan 17 00:27:36.117250 kernel: Asymmetric key parser 'x509' registered Jan 17 00:27:36.117268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:27:36.117285 kernel: io scheduler mq-deadline registered Jan 17 00:27:36.117302 kernel: io scheduler kyber registered Jan 17 00:27:36.117320 kernel: io scheduler bfq registered Jan 17 00:27:36.117337 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:27:36.117356 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 00:27:36.117559 kernel: virtio-pci 0000:00:03.0: 
virtio_pci: leaving for legacy driver Jan 17 00:27:36.117581 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 17 00:27:36.118219 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 17 00:27:36.118253 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 00:27:36.118448 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 17 00:27:36.118473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:27:36.118493 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:27:36.118512 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 00:27:36.118531 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 17 00:27:36.118556 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 17 00:27:36.118825 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 17 00:27:36.118855 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:27:36.118875 kernel: i8042: Warning: Keylock active Jan 17 00:27:36.118894 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:27:36.118914 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:27:36.119111 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 00:27:36.119306 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 00:27:36.119482 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:27:35 UTC (1768609655) Jan 17 00:27:36.119656 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 00:27:36.119681 kernel: intel_pstate: CPU model not supported Jan 17 00:27:36.119701 kernel: pstore: Using crash dump compression: deflate Jan 17 00:27:36.120331 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:27:36.120356 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:27:36.120374 kernel: Segment Routing with IPv6 Jan 17 00:27:36.120393 kernel: In-situ OAM (IOAM) with IPv6 
Jan 17 00:27:36.120418 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:27:36.120437 kernel: Key type dns_resolver registered Jan 17 00:27:36.120455 kernel: IPI shorthand broadcast: enabled Jan 17 00:27:36.120475 kernel: sched_clock: Marking stable (873004135, 144264769)->(1064047094, -46778190) Jan 17 00:27:36.120496 kernel: registered taskstats version 1 Jan 17 00:27:36.120516 kernel: Loading compiled-in X.509 certificates Jan 17 00:27:36.120533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:27:36.120551 kernel: Key type .fscrypt registered Jan 17 00:27:36.120570 kernel: Key type fscrypt-provisioning registered Jan 17 00:27:36.120593 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:27:36.120612 kernel: ima: No architecture policies found Jan 17 00:27:36.120631 kernel: clk: Disabling unused clocks Jan 17 00:27:36.120651 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:27:36.120671 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:27:36.120691 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:27:36.120710 kernel: Run /init as init process Jan 17 00:27:36.120770 kernel: with arguments: Jan 17 00:27:36.120788 kernel: /init Jan 17 00:27:36.120811 kernel: with environment: Jan 17 00:27:36.120827 kernel: HOME=/ Jan 17 00:27:36.120877 kernel: TERM=linux Jan 17 00:27:36.120894 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:27:36.120915 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:27:36.120939 systemd[1]: Detected virtualization google. 
Jan 17 00:27:36.120958 systemd[1]: Detected architecture x86-64. Jan 17 00:27:36.120983 systemd[1]: Running in initrd. Jan 17 00:27:36.121001 systemd[1]: No hostname configured, using default hostname. Jan 17 00:27:36.121022 systemd[1]: Hostname set to <localhost>. Jan 17 00:27:36.121041 systemd[1]: Initializing machine ID from random generator. Jan 17 00:27:36.121060 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:27:36.121078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:27:36.121096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:27:36.121116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:27:36.121139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:27:36.121162 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:27:36.121184 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:27:36.121203 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:27:36.121759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:27:36.121793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:27:36.121814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:27:36.121840 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:27:36.121859 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:27:36.121899 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:27:36.121924 systemd[1]: Reached target timers.target - Timer Units. 
Jan 17 00:27:36.121944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:27:36.121964 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:27:36.121989 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:27:36.122008 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:27:36.122030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:27:36.122051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:27:36.122072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:27:36.122092 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:27:36.122110 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:27:36.122129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:27:36.122147 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:27:36.122170 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:27:36.122189 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:27:36.122218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:27:36.122237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:27:36.122255 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:27:36.122307 systemd-journald[184]: Collecting audit messages is disabled. Jan 17 00:27:36.122360 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:27:36.122380 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:27:36.122403 systemd-journald[184]: Journal started Jan 17 00:27:36.122450 systemd-journald[184]: Runtime Journal (/run/log/journal/b5f91ae213834b20bea6cfc49db0a2b0) is 8.0M, max 148.7M, 140.7M free. 
Jan 17 00:27:36.137744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:27:36.141651 systemd-modules-load[185]: Inserted module 'overlay' Jan 17 00:27:36.146871 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:27:36.149253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:36.180753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:27:36.182167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:27:36.183984 kernel: Bridge firewalling registered Jan 17 00:27:36.184179 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 17 00:27:36.185542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:27:36.186305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:27:36.186955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:27:36.190304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:27:36.200858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:27:36.221256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:27:36.227434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:36.236289 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:27:36.241358 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:27:36.251991 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 17 00:27:36.260957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:27:36.281366 dracut-cmdline[216]: dracut-dracut-053 Jan 17 00:27:36.286137 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:27:36.317518 systemd-resolved[218]: Positive Trust Anchors: Jan 17 00:27:36.318172 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:27:36.318380 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:27:36.324972 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 17 00:27:36.328565 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:27:36.342503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:27:36.395769 kernel: SCSI subsystem initialized Jan 17 00:27:36.407765 kernel: Loading iSCSI transport class v2.0-870. 
Jan 17 00:27:36.420759 kernel: iscsi: registered transport (tcp) Jan 17 00:27:36.445757 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:27:36.445839 kernel: QLogic iSCSI HBA Driver Jan 17 00:27:36.499998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:27:36.506974 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:27:36.549779 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:27:36.549870 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:27:36.551773 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:27:36.596758 kernel: raid6: avx2x4 gen() 17985 MB/s Jan 17 00:27:36.613754 kernel: raid6: avx2x2 gen() 18017 MB/s Jan 17 00:27:36.631145 kernel: raid6: avx2x1 gen() 14058 MB/s Jan 17 00:27:36.631187 kernel: raid6: using algorithm avx2x2 gen() 18017 MB/s Jan 17 00:27:36.649147 kernel: raid6: .... xor() 17578 MB/s, rmw enabled Jan 17 00:27:36.649207 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:27:36.672764 kernel: xor: automatically using best checksumming function avx Jan 17 00:27:36.846773 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:27:36.861372 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:27:36.871974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:27:36.887997 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 17 00:27:36.894959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:27:36.908011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:27:36.939141 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jan 17 00:27:36.979708 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 00:27:36.984046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:27:37.080486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:27:37.094967 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:27:37.132064 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:27:37.136226 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:27:37.139405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:27:37.147839 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:27:37.160961 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:27:37.191747 kernel: scsi host0: Virtio SCSI HBA Jan 17 00:27:37.191863 kernel: blk-mq: reduced tag depth to 10240 Jan 17 00:27:37.201744 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 17 00:27:37.228338 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:27:37.248903 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:27:37.272254 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:27:37.395926 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:27:37.395975 kernel: AES CTR mode by8 optimization enabled Jan 17 00:27:37.396008 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 17 00:27:37.396360 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 17 00:27:37.396609 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 17 00:27:37.396872 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 17 00:27:37.397125 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 00:27:37.397356 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 17 00:27:37.272450 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:37.452791 kernel: GPT:17805311 != 33554431 Jan 17 00:27:37.452832 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:27:37.452858 kernel: GPT:17805311 != 33554431 Jan 17 00:27:37.452883 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:27:37.452907 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:27:37.452932 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 17 00:27:37.315647 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:27:37.336838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:27:37.337114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:37.347913 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:27:37.381229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:27:37.512774 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (459) Jan 17 00:27:37.519276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:37.534896 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (450) Jan 17 00:27:37.557816 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 17 00:27:37.580519 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 17 00:27:37.586102 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 17 00:27:37.612921 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 17 00:27:37.640702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Jan 17 00:27:37.666974 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:27:37.676959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:27:37.709209 disk-uuid[542]: Primary Header is updated. Jan 17 00:27:37.709209 disk-uuid[542]: Secondary Entries is updated. Jan 17 00:27:37.709209 disk-uuid[542]: Secondary Header is updated. Jan 17 00:27:37.734989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:27:37.755747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:27:37.774664 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:37.803907 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:27:38.776744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:27:38.777224 disk-uuid[543]: The operation has completed successfully. Jan 17 00:27:38.849672 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:27:38.849849 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:27:38.884101 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:27:38.912260 sh[568]: Success Jan 17 00:27:38.917881 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 00:27:39.000309 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:27:39.007848 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:27:39.036920 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 00:27:39.074892 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:27:39.074977 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:39.075003 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:27:39.084326 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:27:39.091258 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:27:39.126772 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 00:27:39.132361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:27:39.133371 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:27:39.138949 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:27:39.152937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:27:39.219659 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:39.219768 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:39.219794 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:27:39.238018 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:27:39.238106 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:27:39.262527 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:39.262040 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:27:39.282707 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:27:39.301029 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 17 00:27:39.394113 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:27:39.402069 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:27:39.506666 ignition[675]: Ignition 2.19.0 Jan 17 00:27:39.507144 ignition[675]: Stage: fetch-offline Jan 17 00:27:39.509433 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:27:39.507233 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:39.511056 systemd-networkd[752]: lo: Link UP Jan 17 00:27:39.507252 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:27:39.511061 systemd-networkd[752]: lo: Gained carrier Jan 17 00:27:39.507408 ignition[675]: parsed url from cmdline: "" Jan 17 00:27:39.512678 systemd-networkd[752]: Enumeration completed Jan 17 00:27:39.507416 ignition[675]: no config URL provided Jan 17 00:27:39.513285 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:27:39.507426 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:27:39.513293 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:27:39.507441 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:27:39.515286 systemd-networkd[752]: eth0: Link UP Jan 17 00:27:39.507454 ignition[675]: failed to fetch config: resource requires networking Jan 17 00:27:39.515294 systemd-networkd[752]: eth0: Gained carrier Jan 17 00:27:39.507856 ignition[675]: Ignition finished successfully Jan 17 00:27:39.515307 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 00:27:39.608279 ignition[761]: Ignition 2.19.0 Jan 17 00:27:39.528836 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9' Jan 17 00:27:39.608288 ignition[761]: Stage: fetch Jan 17 00:27:39.528857 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.62/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 00:27:39.608492 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:39.542153 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:27:39.608505 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:27:39.560614 systemd[1]: Reached target network.target - Network. Jan 17 00:27:39.608621 ignition[761]: parsed url from cmdline: "" Jan 17 00:27:39.574129 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:27:39.608636 ignition[761]: no config URL provided Jan 17 00:27:39.621157 unknown[761]: fetched base config from "system" Jan 17 00:27:39.608643 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:27:39.621171 unknown[761]: fetched base config from "system" Jan 17 00:27:39.608654 ignition[761]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:27:39.621189 unknown[761]: fetched user config from "gcp" Jan 17 00:27:39.608678 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 17 00:27:39.624201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:27:39.613900 ignition[761]: GET result: OK Jan 17 00:27:39.641088 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 00:27:39.613972 ignition[761]: parsing config with SHA512: 06ab2ba7c3232437da8087a634573e2be7b7ad287d98e5a2d8577df52e0551056c96f62e27ced66f7a1ddd51cf62240240598ed8d387035404060a298d7267ae Jan 17 00:27:39.691054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:27:39.622305 ignition[761]: fetch: fetch complete Jan 17 00:27:39.713981 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:27:39.622321 ignition[761]: fetch: fetch passed Jan 17 00:27:39.748357 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:27:39.622394 ignition[761]: Ignition finished successfully Jan 17 00:27:39.765166 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:27:39.688340 ignition[768]: Ignition 2.19.0 Jan 17 00:27:39.799939 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:27:39.688349 ignition[768]: Stage: kargs Jan 17 00:27:39.815939 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:27:39.688614 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:39.836937 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:27:39.688630 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:27:39.853938 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:27:39.689768 ignition[768]: kargs: kargs passed Jan 17 00:27:39.877932 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 17 00:27:39.689831 ignition[768]: Ignition finished successfully Jan 17 00:27:39.734534 ignition[774]: Ignition 2.19.0 Jan 17 00:27:39.734544 ignition[774]: Stage: disks Jan 17 00:27:39.734780 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:39.734793 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:27:39.735964 ignition[774]: disks: disks passed Jan 17 00:27:39.736025 ignition[774]: Ignition finished successfully Jan 17 00:27:39.923506 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 00:27:40.127015 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:27:40.159902 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:27:40.279839 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:27:40.280730 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:27:40.281641 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:27:40.300851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:27:40.330258 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:27:40.339418 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 17 00:27:40.393928 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790) Jan 17 00:27:40.393985 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:40.394012 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:40.394036 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:27:40.339495 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:27:40.435899 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:27:40.435936 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:27:40.339531 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:27:40.420028 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:27:40.464165 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:27:40.469965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:27:40.530900 systemd-networkd[752]: eth0: Gained IPv6LL Jan 17 00:27:40.630212 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:27:40.641195 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:27:40.650906 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:27:40.660903 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:27:40.806026 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:27:40.811955 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:27:40.853087 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 17 00:27:40.880894 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:40.865084 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:27:40.904184 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:27:40.916014 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:27:40.937956 ignition[903]: INFO : Ignition 2.19.0 Jan 17 00:27:40.937956 ignition[903]: INFO : Stage: mount Jan 17 00:27:40.937956 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:40.937956 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:27:40.937956 ignition[903]: INFO : mount: mount passed Jan 17 00:27:40.937956 ignition[903]: INFO : Ignition finished successfully Jan 17 00:27:40.936914 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:27:40.961992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:27:41.066953 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914) Jan 17 00:27:41.066998 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:27:41.067024 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:27:41.067047 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:27:41.067068 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:27:41.067088 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:27:41.066252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:27:41.101733 ignition[930]: INFO : Ignition 2.19.0 Jan 17 00:27:41.101733 ignition[930]: INFO : Stage: files Jan 17 00:27:41.115959 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:41.115959 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:27:41.115959 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:27:41.115959 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:27:41.115959 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:27:41.115959 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:27:41.115959 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:27:41.196919 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:27:41.116614 unknown[930]: wrote ssh authorized keys file for user: core Jan 17 00:27:41.282863 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:27:41.473000 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 
00:27:41.473000 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:27:41.504903 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 00:27:41.679025 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 00:27:41.818646 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 
00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:27:42.213037 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 00:27:42.621451 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:27:42.621451 ignition[930]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 
00:27:42.657888 ignition[930]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:27:42.657888 ignition[930]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:27:42.657888 ignition[930]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:27:42.657888 ignition[930]: INFO : files: files passed Jan 17 00:27:42.657888 ignition[930]: INFO : Ignition finished successfully Jan 17 00:27:42.625695 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:27:42.645975 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:27:42.675015 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:27:42.729401 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:27:42.970919 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:27:42.970919 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:27:42.729532 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 17 00:27:43.020905 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:27:42.752311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:27:42.782219 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:27:42.815957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:27:42.908413 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:27:42.908536 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:27:42.922214 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:27:42.942002 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:27:42.963132 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:27:42.968952 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:27:43.033460 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:27:43.051981 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:27:43.089687 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:27:43.101055 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:27:43.120110 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:27:43.139079 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:27:43.139296 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:27:43.170180 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:27:43.193105 systemd[1]: Stopped target basic.target - Basic System. 
Jan 17 00:27:43.213155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:27:43.234095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:27:43.256082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:27:43.277119 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:27:43.297117 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:27:43.316085 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:27:43.334158 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:27:43.355134 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:27:43.373001 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:27:43.373231 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:27:43.401173 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:27:43.421208 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:27:43.570885 ignition[983]: INFO : Ignition 2.19.0 Jan 17 00:27:43.570885 ignition[983]: INFO : Stage: umount Jan 17 00:27:43.570885 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:27:43.570885 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:27:43.570885 ignition[983]: INFO : umount: umount passed Jan 17 00:27:43.570885 ignition[983]: INFO : Ignition finished successfully Jan 17 00:27:43.441997 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:27:43.442202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:27:43.463095 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:27:43.463299 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 17 00:27:43.494113 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:27:43.494346 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:27:43.515175 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:27:43.515373 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:27:43.538985 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:27:43.579010 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:27:43.579315 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:27:43.593978 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:27:43.620886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:27:43.621157 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:27:43.634272 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:27:43.634519 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:27:43.680930 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:27:43.682163 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:27:43.682287 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:27:43.700674 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:27:43.700834 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:27:43.720494 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:27:43.720626 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:27:43.752031 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:27:43.752108 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:27:43.760184 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 17 00:27:43.760314 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:27:43.777212 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:27:43.777281 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:27:43.811132 systemd[1]: Stopped target network.target - Network. Jan 17 00:27:43.821105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:27:43.821195 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:27:43.838161 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:27:43.856094 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:27:43.859811 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:27:43.873080 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:27:43.893104 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:27:43.920074 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:27:43.920144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:27:43.945087 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:27:43.945157 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:27:43.953091 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:27:43.953164 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:27:43.970145 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:27:43.970218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:27:43.987139 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:27:43.987213 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:27:44.021382 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 17 00:27:44.026796 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 17 00:27:44.032315 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:27:44.059351 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:27:44.059492 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:27:44.078501 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:27:44.078943 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:27:44.086701 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:27:44.086843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:27:44.108857 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:27:44.584912 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 00:27:44.136864 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:27:44.136990 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:27:44.155013 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:27:44.155103 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:27:44.174966 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:27:44.175055 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:27:44.194979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:27:44.195079 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:27:44.214225 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:27:44.237372 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:27:44.237558 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 00:27:44.257225 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:27:44.257310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:27:44.278976 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:27:44.279051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:27:44.288883 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:27:44.288975 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:27:44.314862 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:27:44.314978 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:27:44.340848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:27:44.340974 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:27:44.376935 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:27:44.389033 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:27:44.389111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:27:44.407157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:27:44.407253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:44.436567 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:27:44.436699 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:27:44.456341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:27:44.456462 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:27:44.478297 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:27:44.489998 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 17 00:27:44.536254 systemd[1]: Switching root. Jan 17 00:27:44.912917 systemd-journald[184]: Journal stopped Jan 17 00:27:36.106896 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:27:36.106947 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:27:36.106967 kernel: BIOS-provided physical RAM map: Jan 17 00:27:36.106980 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 17 00:27:36.106993 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 17 00:27:36.107006 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 17 00:27:36.107023 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 17 00:27:36.107043 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 17 00:27:36.107056 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 17 00:27:36.107071 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 17 00:27:36.107084 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 17 00:27:36.107098 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 17 00:27:36.107112 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 17 00:27:36.107127 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 17 00:27:36.107151 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 17 00:27:36.107168 kernel: BIOS-e820: [mem 
0x00000000bffe0000-0x00000000bfffffff] reserved Jan 17 00:27:36.107184 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 17 00:27:36.107201 kernel: NX (Execute Disable) protection: active Jan 17 00:27:36.107227 kernel: APIC: Static calls initialized Jan 17 00:27:36.107244 kernel: efi: EFI v2.7 by EDK II Jan 17 00:27:36.107261 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Jan 17 00:27:36.107278 kernel: SMBIOS 2.4 present. Jan 17 00:27:36.107295 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 17 00:27:36.107312 kernel: Hypervisor detected: KVM Jan 17 00:27:36.107333 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:27:36.107349 kernel: kvm-clock: using sched offset of 13026890038 cycles Jan 17 00:27:36.107367 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:27:36.107384 kernel: tsc: Detected 2299.998 MHz processor Jan 17 00:27:36.107402 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:27:36.107419 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:27:36.107437 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 17 00:27:36.107454 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 17 00:27:36.107470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:27:36.107491 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 17 00:27:36.107508 kernel: Using GB pages for direct mapping Jan 17 00:27:36.107525 kernel: Secure boot disabled Jan 17 00:27:36.107541 kernel: ACPI: Early table checksum verification disabled Jan 17 00:27:36.107559 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 17 00:27:36.107576 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 17 00:27:36.107594 kernel: ACPI: FACP 
0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 17 00:27:36.107618 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 17 00:27:36.107640 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 17 00:27:36.107659 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 17 00:27:36.107678 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 17 00:27:36.107696 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 17 00:27:36.107728 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 17 00:27:36.107755 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 17 00:27:36.107775 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 17 00:27:36.107789 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 17 00:27:36.107803 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 17 00:27:36.107818 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 17 00:27:36.107833 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 17 00:27:36.107847 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 17 00:27:36.107862 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 17 00:27:36.107877 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 17 00:27:36.107891 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 17 00:27:36.107911 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 17 00:27:36.107926 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:27:36.107943 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:27:36.107959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x0009ffff] Jan 17 00:27:36.107976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 17 00:27:36.107993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 00:27:36.108009 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 00:27:36.108026 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 00:27:36.108042 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 17 00:27:36.108064 kernel: Zone ranges: Jan 17 00:27:36.108081 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:27:36.108099 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 00:27:36.108114 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:27:36.108131 kernel: Movable zone start for each node Jan 17 00:27:36.108168 kernel: Early memory node ranges Jan 17 00:27:36.108186 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 00:27:36.108203 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 00:27:36.108229 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 00:27:36.108252 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 00:27:36.108270 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:27:36.108287 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 00:27:36.108304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:27:36.108321 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 00:27:36.108339 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 00:27:36.108355 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 00:27:36.108373 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 00:27:36.108390 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 00:27:36.108411 kernel: ACPI: LAPIC_NMI 
(acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:27:36.108429 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:27:36.108446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:27:36.108464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:27:36.108481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:27:36.108498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:27:36.108516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:27:36.108533 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:27:36.108550 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:27:36.108571 kernel: Booting paravirtualized kernel on KVM Jan 17 00:27:36.108589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:27:36.108607 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:27:36.108624 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 17 00:27:36.108641 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:27:36.108658 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:27:36.108675 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:27:36.108692 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:27:36.108712 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:27:36.108757 kernel: random: crng init done Jan 17 00:27:36.108774 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 
bytes, linear) Jan 17 00:27:36.108792 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:27:36.108809 kernel: Fallback order for Node 0: 0 Jan 17 00:27:36.108827 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 00:27:36.108844 kernel: Policy zone: Normal Jan 17 00:27:36.108862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:27:36.108879 kernel: software IO TLB: area num 2. Jan 17 00:27:36.108896 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347148K reserved, 0K cma-reserved) Jan 17 00:27:36.108918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:27:36.108935 kernel: Kernel/User page tables isolation: enabled Jan 17 00:27:36.108951 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:27:36.108968 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:27:36.108985 kernel: Dynamic Preempt: voluntary Jan 17 00:27:36.109003 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:27:36.109021 kernel: rcu: RCU event tracing is enabled. Jan 17 00:27:36.109039 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:27:36.109075 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:27:36.109093 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:27:36.109112 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:27:36.109134 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:27:36.109152 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:27:36.109171 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 00:27:36.109189 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 00:27:36.109215 kernel: Console: colour dummy device 80x25 Jan 17 00:27:36.109237 kernel: printk: console [ttyS0] enabled Jan 17 00:27:36.109256 kernel: ACPI: Core revision 20230628 Jan 17 00:27:36.109274 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:27:36.109292 kernel: x2apic enabled Jan 17 00:27:36.109311 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:27:36.109329 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 00:27:36.109348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:27:36.109366 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 17 00:27:36.109385 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 00:27:36.109407 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 00:27:36.109424 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:27:36.109443 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 00:27:36.109463 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 00:27:36.109482 kernel: Spectre V2 : Mitigation: IBRS Jan 17 00:27:36.109500 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:27:36.109519 kernel: RETBleed: Mitigation: IBRS Jan 17 00:27:36.109538 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 00:27:36.109558 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 00:27:36.109582 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 00:27:36.109601 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 00:27:36.109621 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:27:36.109640 kernel: active return thunk: its_return_thunk Jan 17 00:27:36.109660 
kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:27:36.109680 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:27:36.109699 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:27:36.109741 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:27:36.109761 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:27:36.109786 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 00:27:36.109805 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:27:36.109825 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:27:36.109844 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:27:36.109863 kernel: landlock: Up and running. Jan 17 00:27:36.109883 kernel: SELinux: Initializing. Jan 17 00:27:36.109902 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:27:36.109932 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:27:36.109952 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 00:27:36.109975 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:27:36.109995 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:27:36.110022 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:27:36.110041 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 00:27:36.110061 kernel: signal: max sigframe size: 1776 Jan 17 00:27:36.110086 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:27:36.110106 kernel: rcu: Max phase no-delay instances is 400. 
Jan 17 00:27:36.110125 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:27:36.110145 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:27:36.110175 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:27:36.110195 kernel: .... node #0, CPUs: #1 Jan 17 00:27:36.110221 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 00:27:36.110241 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 00:27:36.110262 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:27:36.110280 kernel: smpboot: Max logical packages: 1 Jan 17 00:27:36.110301 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 00:27:36.110320 kernel: devtmpfs: initialized Jan 17 00:27:36.110344 kernel: x86/mm: Memory block size: 128MB Jan 17 00:27:36.110364 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 00:27:36.110384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:27:36.110403 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:27:36.110423 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:27:36.110443 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:27:36.110462 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:27:36.110482 kernel: audit: type=2000 audit(1768609654.845:1): state=initialized audit_enabled=0 res=1 Jan 17 00:27:36.110501 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:27:36.110525 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:27:36.110544 kernel: cpuidle: using governor menu Jan 17 00:27:36.110563 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 
00:27:36.110588 kernel: dca service started, version 1.12.1 Jan 17 00:27:36.110608 kernel: PCI: Using configuration type 1 for base access Jan 17 00:27:36.110627 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 00:27:36.110647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:27:36.110666 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:27:36.110686 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:27:36.110710 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:27:36.110763 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:27:36.110783 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:27:36.110802 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:27:36.110822 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 00:27:36.110841 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:27:36.110861 kernel: ACPI: Interpreter enabled Jan 17 00:27:36.110881 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:27:36.110900 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:27:36.110925 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:27:36.110945 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 00:27:36.110963 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 00:27:36.110983 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:27:36.111266 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:27:36.111473 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 00:27:36.111659 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 00:27:36.111688 kernel: PCI host bridge to bus 0000:00 Jan 17 
00:27:36.111916 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:27:36.112096 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:27:36.112280 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:27:36.112453 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 17 00:27:36.112643 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:27:36.112890 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:27:36.113105 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 17 00:27:36.113321 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:27:36.113515 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:27:36.113757 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 17 00:27:36.113960 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 17 00:27:36.114154 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 17 00:27:36.114371 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:27:36.114567 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 17 00:27:36.114794 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 17 00:27:36.114999 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:27:36.115192 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 17 00:27:36.115389 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 17 00:27:36.115414 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:27:36.115441 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:27:36.115461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:27:36.115481 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:27:36.115501 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:27:36.115520 kernel: iommu: Default domain type: Translated
Jan 17 00:27:36.115540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:27:36.115559 kernel: efivars: Registered efivars operations
Jan 17 00:27:36.115579 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:27:36.115599 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:27:36.115623 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 17 00:27:36.115642 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 17 00:27:36.115671 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 17 00:27:36.115689 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 17 00:27:36.115729 kernel: vgaarb: loaded
Jan 17 00:27:36.115760 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:27:36.115780 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:27:36.115799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:27:36.115819 kernel: pnp: PnP ACPI init
Jan 17 00:27:36.115844 kernel: pnp: PnP ACPI: found 7 devices
Jan 17 00:27:36.115864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:27:36.115884 kernel: NET: Registered PF_INET protocol family
Jan 17 00:27:36.115905 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:27:36.115924 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 00:27:36.115944 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:27:36.115963 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:27:36.115983 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 00:27:36.116003 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 00:27:36.116026 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:27:36.116046 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:27:36.116065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:27:36.116085 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:27:36.116290 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:27:36.116469 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:27:36.116641 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:27:36.116846 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 17 00:27:36.117050 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:27:36.117075 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:27:36.117095 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:27:36.117114 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 17 00:27:36.117133 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:27:36.117151 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 00:27:36.117168 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:27:36.117184 kernel: Initialise system trusted keyrings
Jan 17 00:27:36.117206 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 00:27:36.117234 kernel: Key type asymmetric registered
Jan 17 00:27:36.117250 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:27:36.117268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:27:36.117285 kernel: io scheduler mq-deadline registered
Jan 17 00:27:36.117302 kernel: io scheduler kyber registered
Jan 17 00:27:36.117320 kernel: io scheduler bfq registered
Jan 17 00:27:36.117337 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:27:36.117356 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:27:36.117559 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 17 00:27:36.117581 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 17 00:27:36.118219 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 17 00:27:36.118253 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:27:36.118448 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 17 00:27:36.118473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:27:36.118493 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:27:36.118512 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 00:27:36.118531 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 17 00:27:36.118556 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 17 00:27:36.118825 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 17 00:27:36.118855 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:27:36.118875 kernel: i8042: Warning: Keylock active
Jan 17 00:27:36.118894 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:27:36.118914 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:27:36.119111 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:27:36.119306 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:27:36.119482 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:27:35 UTC (1768609655)
Jan 17 00:27:36.119656 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:27:36.119681 kernel: intel_pstate: CPU model not supported
Jan 17 00:27:36.119701 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:27:36.120331 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:27:36.120356 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:27:36.120374 kernel: Segment Routing with IPv6
Jan 17 00:27:36.120393 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:27:36.120418 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:27:36.120437 kernel: Key type dns_resolver registered
Jan 17 00:27:36.120455 kernel: IPI shorthand broadcast: enabled
Jan 17 00:27:36.120475 kernel: sched_clock: Marking stable (873004135, 144264769)->(1064047094, -46778190)
Jan 17 00:27:36.120496 kernel: registered taskstats version 1
Jan 17 00:27:36.120516 kernel: Loading compiled-in X.509 certificates
Jan 17 00:27:36.120533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:27:36.120551 kernel: Key type .fscrypt registered
Jan 17 00:27:36.120570 kernel: Key type fscrypt-provisioning registered
Jan 17 00:27:36.120593 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:27:36.120612 kernel: ima: No architecture policies found
Jan 17 00:27:36.120631 kernel: clk: Disabling unused clocks
Jan 17 00:27:36.120651 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:27:36.120671 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:27:36.120691 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:27:36.120710 kernel: Run /init as init process
Jan 17 00:27:36.120770 kernel: with arguments:
Jan 17 00:27:36.120788 kernel: /init
Jan 17 00:27:36.120811 kernel: with environment:
Jan 17 00:27:36.120827 kernel: HOME=/
Jan 17 00:27:36.120877 kernel: TERM=linux
Jan 17 00:27:36.120894 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:27:36.120915 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:27:36.120939 systemd[1]: Detected virtualization google.
Jan 17 00:27:36.120958 systemd[1]: Detected architecture x86-64.
Jan 17 00:27:36.120983 systemd[1]: Running in initrd.
Jan 17 00:27:36.121001 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:27:36.121022 systemd[1]: Hostname set to .
Jan 17 00:27:36.121041 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:27:36.121060 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:27:36.121078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:27:36.121096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:27:36.121116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:27:36.121139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:27:36.121162 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:27:36.121184 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:27:36.121203 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:27:36.121759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:27:36.121793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:27:36.121814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:27:36.121840 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:27:36.121859 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:27:36.121899 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:27:36.121924 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:27:36.121944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:27:36.121964 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:27:36.121989 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:27:36.122008 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:27:36.122030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:27:36.122051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:27:36.122072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:27:36.122092 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:27:36.122110 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:27:36.122129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:27:36.122147 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:27:36.122170 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:27:36.122189 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:27:36.122218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:27:36.122237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:27:36.122255 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:27:36.122307 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 00:27:36.122360 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:27:36.122380 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:27:36.122403 systemd-journald[184]: Journal started
Jan 17 00:27:36.122450 systemd-journald[184]: Runtime Journal (/run/log/journal/b5f91ae213834b20bea6cfc49db0a2b0) is 8.0M, max 148.7M, 140.7M free.
Jan 17 00:27:36.137744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:27:36.141651 systemd-modules-load[185]: Inserted module 'overlay'
Jan 17 00:27:36.146871 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:27:36.149253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:27:36.180753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:27:36.182167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:27:36.183984 kernel: Bridge firewalling registered
Jan 17 00:27:36.184179 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 17 00:27:36.185542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:27:36.186305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:27:36.186955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:27:36.190304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:27:36.200858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:27:36.221256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:27:36.227434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:27:36.236289 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:27:36.241358 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:27:36.251991 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:27:36.260957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:27:36.281366 dracut-cmdline[216]: dracut-dracut-053
Jan 17 00:27:36.286137 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:27:36.317518 systemd-resolved[218]: Positive Trust Anchors:
Jan 17 00:27:36.318172 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:27:36.318380 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:27:36.324972 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 17 00:27:36.328565 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:27:36.342503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:27:36.395769 kernel: SCSI subsystem initialized
Jan 17 00:27:36.407765 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:27:36.420759 kernel: iscsi: registered transport (tcp)
Jan 17 00:27:36.445757 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:27:36.445839 kernel: QLogic iSCSI HBA Driver
Jan 17 00:27:36.499998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:27:36.506974 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:27:36.549779 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:27:36.549870 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:27:36.551773 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:27:36.596758 kernel: raid6: avx2x4 gen() 17985 MB/s
Jan 17 00:27:36.613754 kernel: raid6: avx2x2 gen() 18017 MB/s
Jan 17 00:27:36.631145 kernel: raid6: avx2x1 gen() 14058 MB/s
Jan 17 00:27:36.631187 kernel: raid6: using algorithm avx2x2 gen() 18017 MB/s
Jan 17 00:27:36.649147 kernel: raid6: .... xor() 17578 MB/s, rmw enabled
Jan 17 00:27:36.649207 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:27:36.672764 kernel: xor: automatically using best checksumming function avx
Jan 17 00:27:36.846773 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:27:36.861372 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:27:36.871974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:27:36.887997 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 17 00:27:36.894959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:27:36.908011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:27:36.939141 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jan 17 00:27:36.979708 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:27:36.984046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:27:37.080486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:27:37.094967 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:27:37.132064 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:27:37.136226 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:27:37.139405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:27:37.147839 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:27:37.160961 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:27:37.191747 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:27:37.191863 kernel: blk-mq: reduced tag depth to 10240
Jan 17 00:27:37.201744 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 17 00:27:37.228338 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:27:37.248903 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:27:37.272254 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:27:37.395926 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:27:37.395975 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:27:37.396008 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Jan 17 00:27:37.396360 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 17 00:27:37.396609 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 17 00:27:37.396872 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 17 00:27:37.397125 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:27:37.397356 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:27:37.272450 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:27:37.452791 kernel: GPT:17805311 != 33554431
Jan 17 00:27:37.452832 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:27:37.452858 kernel: GPT:17805311 != 33554431
Jan 17 00:27:37.452883 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:27:37.452907 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:27:37.452932 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 17 00:27:37.315647 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:27:37.336838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:27:37.337114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:27:37.347913 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:27:37.381229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:27:37.512774 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (459)
Jan 17 00:27:37.519276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:27:37.534896 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (450)
Jan 17 00:27:37.557816 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 17 00:27:37.580519 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 17 00:27:37.586102 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 17 00:27:37.612921 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 17 00:27:37.640702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 00:27:37.666974 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:27:37.676959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:27:37.709209 disk-uuid[542]: Primary Header is updated.
Jan 17 00:27:37.709209 disk-uuid[542]: Secondary Entries is updated.
Jan 17 00:27:37.709209 disk-uuid[542]: Secondary Header is updated.
Jan 17 00:27:37.734989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:27:37.755747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:27:37.774664 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:27:37.803907 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:27:38.776744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:27:38.777224 disk-uuid[543]: The operation has completed successfully.
Jan 17 00:27:38.849672 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:27:38.849849 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:27:38.884101 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:27:38.912260 sh[568]: Success
Jan 17 00:27:38.917881 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:27:39.000309 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:27:39.007848 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:27:39.036920 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:27:39.074892 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:27:39.074977 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:27:39.075003 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:27:39.084326 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:27:39.091258 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:27:39.126772 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:27:39.132361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:27:39.133371 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:27:39.138949 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:27:39.152937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:27:39.219659 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:27:39.219768 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:27:39.219794 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:27:39.238018 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:27:39.238106 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:27:39.262527 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:27:39.262040 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:27:39.282707 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:27:39.301029 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:27:39.394113 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:27:39.402069 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:27:39.506666 ignition[675]: Ignition 2.19.0
Jan 17 00:27:39.507144 ignition[675]: Stage: fetch-offline
Jan 17 00:27:39.509433 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:27:39.507233 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:27:39.511056 systemd-networkd[752]: lo: Link UP
Jan 17 00:27:39.507252 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:27:39.511061 systemd-networkd[752]: lo: Gained carrier
Jan 17 00:27:39.507408 ignition[675]: parsed url from cmdline: ""
Jan 17 00:27:39.512678 systemd-networkd[752]: Enumeration completed
Jan 17 00:27:39.507416 ignition[675]: no config URL provided
Jan 17 00:27:39.513285 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:27:39.507426 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:27:39.513293 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:27:39.507441 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:27:39.515286 systemd-networkd[752]: eth0: Link UP
Jan 17 00:27:39.507454 ignition[675]: failed to fetch config: resource requires networking
Jan 17 00:27:39.515294 systemd-networkd[752]: eth0: Gained carrier
Jan 17 00:27:39.507856 ignition[675]: Ignition finished successfully
Jan 17 00:27:39.515307 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:27:39.608279 ignition[761]: Ignition 2.19.0
Jan 17 00:27:39.528836 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9'
Jan 17 00:27:39.608288 ignition[761]: Stage: fetch
Jan 17 00:27:39.528857 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.62/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 17 00:27:39.608492 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:27:39.542153 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:27:39.608505 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:27:39.560614 systemd[1]: Reached target network.target - Network.
Jan 17 00:27:39.608621 ignition[761]: parsed url from cmdline: ""
Jan 17 00:27:39.574129 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:27:39.608636 ignition[761]: no config URL provided
Jan 17 00:27:39.621157 unknown[761]: fetched base config from "system"
Jan 17 00:27:39.608643 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:27:39.621171 unknown[761]: fetched base config from "system"
Jan 17 00:27:39.608654 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:27:39.621189 unknown[761]: fetched user config from "gcp"
Jan 17 00:27:39.608678 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 17 00:27:39.624201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:27:39.613900 ignition[761]: GET result: OK
Jan 17 00:27:39.641088 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:27:39.613972 ignition[761]: parsing config with SHA512: 06ab2ba7c3232437da8087a634573e2be7b7ad287d98e5a2d8577df52e0551056c96f62e27ced66f7a1ddd51cf62240240598ed8d387035404060a298d7267ae
Jan 17 00:27:39.691054 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:27:39.622305 ignition[761]: fetch: fetch complete
Jan 17 00:27:39.713981 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:27:39.622321 ignition[761]: fetch: fetch passed
Jan 17 00:27:39.748357 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:27:39.622394 ignition[761]: Ignition finished successfully
Jan 17 00:27:39.765166 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:27:39.688340 ignition[768]: Ignition 2.19.0
Jan 17 00:27:39.799939 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:27:39.688349 ignition[768]: Stage: kargs
Jan 17 00:27:39.815939 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:27:39.688614 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:27:39.836937 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:27:39.688630 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:27:39.853938 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:27:39.689768 ignition[768]: kargs: kargs passed
Jan 17 00:27:39.877932 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:27:39.689831 ignition[768]: Ignition finished successfully
Jan 17 00:27:39.734534 ignition[774]: Ignition 2.19.0
Jan 17 00:27:39.734544 ignition[774]: Stage: disks
Jan 17 00:27:39.734780 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:27:39.734793 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:27:39.735964 ignition[774]: disks: disks passed
Jan 17 00:27:39.736025 ignition[774]: Ignition finished successfully
Jan 17 00:27:39.923506 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:27:40.127015 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:27:40.159902 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:27:40.279839 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:27:40.280730 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:27:40.281641 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:27:40.300851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:27:40.330258 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:27:40.339418 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:27:40.393928 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790)
Jan 17 00:27:40.393985 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:27:40.394012 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:27:40.394036 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:27:40.339495 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:27:40.435899 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:27:40.435936 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:27:40.339531 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:27:40.420028 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:27:40.464165 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:27:40.469965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:27:40.530900 systemd-networkd[752]: eth0: Gained IPv6LL
Jan 17 00:27:40.630212 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:27:40.641195 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:27:40.650906 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:27:40.660903 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:27:40.806026 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:27:40.811955 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:27:40.853087 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:27:40.880894 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:27:40.865084 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:27:40.904184 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:27:40.916014 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:27:40.937956 ignition[903]: INFO : Ignition 2.19.0
Jan 17 00:27:40.937956 ignition[903]: INFO : Stage: mount
Jan 17 00:27:40.937956 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:27:40.937956 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:27:40.937956 ignition[903]: INFO : mount: mount passed
Jan 17 00:27:40.937956 ignition[903]: INFO : Ignition finished successfully
Jan 17 00:27:40.936914 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:27:40.961992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:27:41.066953 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914)
Jan 17 00:27:41.066998 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:27:41.067024 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:27:41.067047 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:27:41.067068 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:27:41.067088 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:27:41.066252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:27:41.101733 ignition[930]: INFO : Ignition 2.19.0
Jan 17 00:27:41.101733 ignition[930]: INFO : Stage: files
Jan 17 00:27:41.115959 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:27:41.115959 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:27:41.115959 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:27:41.115959 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:27:41.115959 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:27:41.115959 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:27:41.115959 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:27:41.196919 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:27:41.196919 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 17 00:27:41.116614 unknown[930]: wrote ssh authorized keys file for user: core
Jan 17 00:27:41.282863 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:27:41.473000 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:27:41.473000 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:27:41.504903 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 17 00:27:41.679025 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 17 00:27:41.818646 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:27:41.833865 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 17 00:27:42.213037 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 17 00:27:42.621451 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:27:42.621451 ignition[930]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:27:42.657888 ignition[930]: INFO : files: files passed
Jan 17 00:27:42.657888 ignition[930]: INFO : Ignition finished successfully
Jan 17 00:27:42.625695 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:27:42.645975 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:27:42.675015 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:27:42.729401 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:27:42.970919 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:27:42.970919 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:27:42.729532 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:27:43.020905 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:27:42.752311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:27:42.782219 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:27:42.815957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:27:42.908413 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:27:42.908536 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:27:42.922214 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:27:42.942002 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:27:42.963132 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:27:42.968952 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:27:43.033460 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:27:43.051981 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:27:43.089687 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:27:43.101055 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:27:43.120110 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:27:43.139079 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:27:43.139296 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:27:43.170180 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:27:43.193105 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:27:43.213155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:27:43.234095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:27:43.256082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:27:43.277119 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:27:43.297117 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:27:43.316085 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:27:43.334158 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:27:43.355134 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:27:43.373001 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:27:43.373231 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:27:43.401173 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:27:43.421208 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:27:43.570885 ignition[983]: INFO : Ignition 2.19.0
Jan 17 00:27:43.570885 ignition[983]: INFO : Stage: umount
Jan 17 00:27:43.570885 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:27:43.570885 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:27:43.570885 ignition[983]: INFO : umount: umount passed
Jan 17 00:27:43.570885 ignition[983]: INFO : Ignition finished successfully
Jan 17 00:27:43.441997 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:27:43.442202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:27:43.463095 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:27:43.463299 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:27:43.494113 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:27:43.494346 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:27:43.515175 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:27:43.515373 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:27:43.538985 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:27:43.579010 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:27:43.579315 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:27:43.593978 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:27:43.620886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:27:43.621157 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:27:43.634272 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:27:43.634519 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:27:43.680930 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:27:43.682163 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:27:43.682287 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:27:43.700674 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:27:43.700834 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:27:43.720494 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:27:43.720626 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:27:43.752031 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:27:43.752108 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:27:43.760184 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:27:43.760314 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:27:43.777212 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:27:43.777281 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:27:43.811132 systemd[1]: Stopped target network.target - Network.
Jan 17 00:27:43.821105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:27:43.821195 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:27:43.838161 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:27:43.856094 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:27:43.859811 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:27:43.873080 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:27:43.893104 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:27:43.920074 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:27:43.920144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:27:43.945087 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:27:43.945157 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:27:43.953091 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:27:43.953164 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:27:43.970145 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:27:43.970218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:27:43.987139 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:27:43.987213 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:27:44.021382 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:27:44.026796 systemd-networkd[752]: eth0: DHCPv6 lease lost
Jan 17 00:27:44.032315 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:27:44.059351 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:27:44.059492 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:27:44.078501 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:27:44.078943 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:27:44.086701 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:27:44.086843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:27:44.108857 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:27:44.584912 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:27:44.136864 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:27:44.136990 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:27:44.155013 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:27:44.155103 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:27:44.174966 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:27:44.175055 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:27:44.194979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:27:44.195079 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:27:44.214225 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:27:44.237372 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:27:44.237558 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:27:44.257225 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:27:44.257310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:27:44.278976 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:27:44.279051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:27:44.288883 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:27:44.288975 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:27:44.314862 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:27:44.314978 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:27:44.340848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:27:44.340974 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:27:44.376935 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:27:44.389033 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:27:44.389111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:27:44.407157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:27:44.407253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:27:44.436567 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:27:44.436699 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:27:44.456341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:27:44.456462 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:27:44.478297 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:27:44.489998 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:27:44.536254 systemd[1]: Switching root.
Jan 17 00:27:44.912917 systemd-journald[184]: Journal stopped
Jan 17 00:27:47.416554 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:27:47.416618 kernel: SELinux: policy capability open_perms=1
Jan 17 00:27:47.416641 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:27:47.416659 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:27:47.416677 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:27:47.416694 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:27:47.416731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:27:47.416754 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:27:47.416772 kernel: audit: type=1403 audit(1768609665.415:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:27:47.416793 systemd[1]: Successfully loaded SELinux policy in 84.728ms.
Jan 17 00:27:47.416815 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.731ms.
Jan 17 00:27:47.416838 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:27:47.416858 systemd[1]: Detected virtualization google.
Jan 17 00:27:47.416879 systemd[1]: Detected architecture x86-64.
Jan 17 00:27:47.416905 systemd[1]: Detected first boot.
Jan 17 00:27:47.416927 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:27:47.416948 zram_generator::config[1042]: No configuration found.
Jan 17 00:27:47.416974 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:27:47.416994 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:27:47.417019 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:27:47.417040 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:27:47.417061 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:27:47.417081 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:27:47.417100 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:27:47.417122 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:27:47.417143 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:27:47.417168 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:27:47.417196 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:27:47.417217 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:27:47.417239 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:27:47.417262 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:27:47.417285 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:27:47.417307 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:27:47.417331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:27:47.417357 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:27:47.417377 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:27:47.417398 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:27:47.417420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:27:47.417444 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:27:47.417464 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:27:47.417495 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:27:47.417520 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:27:47.417544 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:27:47.417573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:27:47.417597 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:27:47.417621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:27:47.417644 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:27:47.417666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:27:47.417690 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:27:47.417731 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:27:47.417764 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:27:47.417789 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:27:47.417812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:27:47.417837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:27:47.417868 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:27:47.417892 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:27:47.417916 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:27:47.417940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:27:47.417964 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:27:47.417990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:27:47.418015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:27:47.418038 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:27:47.418061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:27:47.418091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:27:47.418115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:27:47.418139 kernel: ACPI: bus type drm_connector registered
Jan 17 00:27:47.418161 kernel: fuse: init (API version 7.39)
Jan 17 00:27:47.418191 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:27:47.418217 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 17 00:27:47.418242 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 17 00:27:47.418266 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:27:47.418295 kernel: loop: module loaded
Jan 17 00:27:47.418317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:27:47.418343 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:27:47.418404 systemd-journald[1147]: Collecting audit messages is disabled.
Jan 17 00:27:47.418456 systemd-journald[1147]: Journal started
Jan 17 00:27:47.418501 systemd-journald[1147]: Runtime Journal (/run/log/journal/14738315fdae498190ba8dbd817b77de) is 8.0M, max 148.7M, 140.7M free.
Jan 17 00:27:47.428783 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:27:47.462761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:27:47.490742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:27:47.500779 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:27:47.512442 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:27:47.522129 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:27:47.532131 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:27:47.542167 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:27:47.552089 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:27:47.562056 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:27:47.572419 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:27:47.585309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:27:47.597222 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:27:47.597489 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:27:47.609303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:27:47.609577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:27:47.621251 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:27:47.621514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:27:47.632282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:27:47.632555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:27:47.644236 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:27:47.644501 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:27:47.654237 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:27:47.654526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:27:47.665372 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:27:47.675450 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:27:47.687330 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:27:47.699332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:27:47.723695 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:27:47.739862 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:27:47.762784 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:27:47.773887 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:27:47.783973 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:27:47.801910 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:27:47.813938 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:27:47.819915 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:27:47.828106 systemd-journald[1147]: Time spent on flushing to /var/log/journal/14738315fdae498190ba8dbd817b77de is 113.904ms for 920 entries.
Jan 17 00:27:47.828106 systemd-journald[1147]: System Journal (/var/log/journal/14738315fdae498190ba8dbd817b77de) is 8.0M, max 584.8M, 576.8M free.
Jan 17 00:27:47.971892 systemd-journald[1147]: Received client request to flush runtime journal.
Jan 17 00:27:47.836663 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:27:47.851951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:27:47.870965 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:27:47.892331 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:27:47.916957 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:27:47.929101 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:27:47.942429 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:27:47.952257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:27:47.976314 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:27:47.995330 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:27:47.995421 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:27:47.996389 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jan 17 00:27:47.996418 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jan 17 00:27:48.007702 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:27:48.034000 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:27:48.088390 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:27:48.106985 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:27:48.152809 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 17 00:27:48.153333 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 17 00:27:48.162419 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:27:48.627645 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:27:48.644951 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:27:48.693369 systemd-udevd[1211]: Using default interface naming scheme 'v255'. Jan 17 00:27:48.738188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:27:48.763987 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:27:48.810045 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:27:48.875277 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 00:27:48.957743 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 17 00:27:49.040202 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:27:49.052765 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:27:49.072742 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 00:27:49.073143 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 17 00:27:49.090991 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1214) Jan 17 00:27:49.123780 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 00:27:49.148317 systemd-networkd[1221]: lo: Link UP Jan 17 00:27:49.150161 systemd-networkd[1221]: lo: Gained carrier Jan 17 00:27:49.150760 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:27:49.155474 systemd-networkd[1221]: Enumeration completed Jan 17 00:27:49.155872 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:27:49.157426 systemd-networkd[1221]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:27:49.157551 systemd-networkd[1221]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:27:49.159140 systemd-networkd[1221]: eth0: Link UP Jan 17 00:27:49.159251 systemd-networkd[1221]: eth0: Gained carrier Jan 17 00:27:49.159606 systemd-networkd[1221]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 00:27:49.170884 systemd-networkd[1221]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9' Jan 17 00:27:49.171192 systemd-networkd[1221]: eth0: DHCPv4 address 10.128.0.62/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 00:27:49.190740 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:27:49.221896 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 00:27:49.268751 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:27:49.275530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 00:27:49.301118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:27:49.323676 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:27:49.330014 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:27:49.357803 lvm[1255]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:27:49.397492 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:27:49.398659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:27:49.408059 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:27:49.424069 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:27:49.430373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:27:49.462374 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:27:49.475325 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 17 00:27:49.486911 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:27:49.486963 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:27:49.496906 systemd[1]: Reached target machines.target - Containers. Jan 17 00:27:49.507368 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:27:49.525003 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:27:49.546957 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:27:49.557107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:27:49.572188 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:27:49.590810 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:27:49.607566 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:27:49.613431 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:27:49.635006 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:27:49.652678 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:27:49.654053 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 17 00:27:49.673766 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 00:27:49.745790 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:27:49.780768 kernel: loop1: detected capacity change from 0 to 54824 Jan 17 00:27:49.838826 kernel: loop2: detected capacity change from 0 to 224512 Jan 17 00:27:49.944780 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 00:27:50.019764 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 00:27:50.063085 kernel: loop5: detected capacity change from 0 to 54824 Jan 17 00:27:50.093766 kernel: loop6: detected capacity change from 0 to 224512 Jan 17 00:27:50.138005 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 00:27:50.177592 (sd-merge)[1285]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 17 00:27:50.178708 (sd-merge)[1285]: Merged extensions into '/usr'. Jan 17 00:27:50.204670 systemd[1]: Reloading requested from client PID 1271 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:27:50.204695 systemd[1]: Reloading... Jan 17 00:27:50.319473 zram_generator::config[1309]: No configuration found. Jan 17 00:27:50.514480 ldconfig[1266]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:27:50.550574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:27:50.639501 systemd[1]: Reloading finished in 434 ms. Jan 17 00:27:50.657376 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:27:50.668401 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:27:50.693979 systemd[1]: Starting ensure-sysext.service... Jan 17 00:27:50.705953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 17 00:27:50.706922 systemd-networkd[1221]: eth0: Gained IPv6LL Jan 17 00:27:50.717608 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:27:50.735873 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:27:50.736082 systemd[1]: Reloading... Jan 17 00:27:50.747388 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:27:50.748651 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:27:50.750673 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:27:50.751325 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 17 00:27:50.751452 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 17 00:27:50.758005 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:27:50.758168 systemd-tmpfiles[1361]: Skipping /boot Jan 17 00:27:50.776571 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:27:50.776853 systemd-tmpfiles[1361]: Skipping /boot Jan 17 00:27:50.859766 zram_generator::config[1387]: No configuration found. Jan 17 00:27:51.017038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:27:51.107145 systemd[1]: Reloading finished in 370 ms. Jan 17 00:27:51.134581 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:27:51.156200 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:27:51.173536 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 17 00:27:51.191921 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:27:51.214491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:27:51.238121 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:27:51.249548 augenrules[1459]: No rules Jan 17 00:27:51.257617 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:27:51.279689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:27:51.281351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:27:51.298297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:27:51.317759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:27:51.339410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:27:51.349026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:27:51.349619 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:27:51.353733 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:27:51.367165 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:27:51.380188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:27:51.380705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:27:51.394822 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 17 00:27:51.407106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:27:51.407423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:27:51.408235 systemd-resolved[1454]: Positive Trust Anchors: Jan 17 00:27:51.408253 systemd-resolved[1454]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:27:51.408317 systemd-resolved[1454]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:27:51.416665 systemd-resolved[1454]: Defaulting to hostname 'linux'. Jan 17 00:27:51.419482 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:27:51.431529 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:27:51.431855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:27:51.452333 systemd[1]: Reached target network.target - Network. Jan 17 00:27:51.461070 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:27:51.471123 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:27:51.483097 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:27:51.483536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:27:51.489095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 17 00:27:51.512236 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:27:51.531183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:27:51.553200 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:27:51.577611 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:27:51.586036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:27:51.586383 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:27:51.607206 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:27:51.616930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:27:51.617185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:27:51.620699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:27:51.621130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:27:51.632577 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:27:51.632880 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:27:51.643536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:27:51.643873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:27:51.655570 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:27:51.655876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:27:51.671699 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 17 00:27:51.688351 systemd[1]: Finished ensure-sysext.service. Jan 17 00:27:51.697894 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:27:51.718974 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 17 00:27:51.728922 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:27:51.729234 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:27:51.739063 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:27:51.752008 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:27:51.763543 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:27:51.774002 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:27:51.784888 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:27:51.795943 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:27:51.796013 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:27:51.804895 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:27:51.813469 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:27:51.825795 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:27:51.834184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:27:51.835574 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 17 00:27:51.847151 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 17 00:27:51.863954 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:27:51.873870 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:27:51.883888 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:27:51.893202 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:27:51.893291 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:27:51.893329 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:27:51.899870 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:27:51.923772 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:27:51.941176 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:27:51.980317 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:27:52.005374 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:27:52.014900 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:27:52.017760 jq[1525]: false Jan 17 00:27:52.028639 coreos-metadata[1522]: Jan 17 00:27:52.027 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 17 00:27:52.026942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 00:27:52.033888 coreos-metadata[1522]: Jan 17 00:27:52.031 INFO Fetch successful Jan 17 00:27:52.033888 coreos-metadata[1522]: Jan 17 00:27:52.031 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 17 00:27:52.033888 coreos-metadata[1522]: Jan 17 00:27:52.032 INFO Fetch successful Jan 17 00:27:52.033888 coreos-metadata[1522]: Jan 17 00:27:52.032 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 17 00:27:52.034398 coreos-metadata[1522]: Jan 17 00:27:52.034 INFO Fetch successful Jan 17 00:27:52.034398 coreos-metadata[1522]: Jan 17 00:27:52.034 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 17 00:27:52.037639 coreos-metadata[1522]: Jan 17 00:27:52.034 INFO Fetch successful Jan 17 00:27:52.055088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:27:52.067961 extend-filesystems[1528]: Found loop4 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found loop5 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found loop6 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found loop7 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda1 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda2 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda3 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found usr Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda4 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda6 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda7 Jan 17 00:27:52.076942 extend-filesystems[1528]: Found sda9 Jan 17 00:27:52.076942 extend-filesystems[1528]: Checking size of /dev/sda9 Jan 17 00:27:52.230038 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Jan 17 00:27:52.094676 dbus-daemon[1524]: [system] SELinux support is enabled 
Jan 17 00:27:52.230675 extend-filesystems[1528]: Resized partition /dev/sda9 Jan 17 00:27:52.259901 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Jan 17 00:27:52.077502 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:27:52.099617 dbus-daemon[1524]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1221 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:27:52.268953 extend-filesystems[1547]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:27:52.099252 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: ---------------------------------------------------- Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: corporation. Support and training for ntp-4 are 
Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: available at https://www.nwtime.org/support Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: ---------------------------------------------------- Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: proto: precision = 0.090 usec (-23) Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: basedate set to 2026-01-04 Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: gps base set to 2026-01-04 (week 2400) Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Listen normally on 3 eth0 10.128.0.62:123 Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Listen normally on 4 lo [::1]:123 Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:3e%2]:123 Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: Listening on routing socket on fd #22 for interface updates Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:27:52.287138 ntpd[1537]: 17 Jan 00:27:52 ntpd[1537]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:27:52.182906 ntpd[1537]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:27:52.302212 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1564) Jan 17 00:27:52.302267 extend-filesystems[1547]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:27:52.302267 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 2 
Jan 17 00:27:52.302267 extend-filesystems[1547]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Jan 17 00:27:52.132783 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 17 00:27:52.182941 ntpd[1537]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:27:52.333463 extend-filesystems[1528]: Resized filesystem in /dev/sda9 Jan 17 00:27:52.187060 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:27:52.365084 init.sh[1549]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 17 00:27:52.365084 init.sh[1549]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 17 00:27:52.365084 init.sh[1549]: + /usr/bin/google_instance_setup Jan 17 00:27:52.182957 ntpd[1537]: ---------------------------------------------------- Jan 17 00:27:52.207076 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:27:52.182971 ntpd[1537]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:27:52.255611 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:27:52.182985 ntpd[1537]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:27:52.281039 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:27:52.183000 ntpd[1537]: corporation. Support and training for ntp-4 are Jan 17 00:27:52.311594 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 17 00:27:52.183016 ntpd[1537]: available at https://www.nwtime.org/support Jan 17 00:27:52.318043 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:27:52.183031 ntpd[1537]: ---------------------------------------------------- Jan 17 00:27:52.358499 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 17 00:27:52.187257 ntpd[1537]: proto: precision = 0.090 usec (-23) Jan 17 00:27:52.190696 ntpd[1537]: basedate set to 2026-01-04 Jan 17 00:27:52.190767 ntpd[1537]: gps base set to 2026-01-04 (week 2400) Jan 17 00:27:52.197536 ntpd[1537]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:27:52.197616 ntpd[1537]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:27:52.200806 ntpd[1537]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:27:52.200912 ntpd[1537]: Listen normally on 3 eth0 10.128.0.62:123 Jan 17 00:27:52.201120 ntpd[1537]: Listen normally on 4 lo [::1]:123 Jan 17 00:27:52.201218 ntpd[1537]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:3e%2]:123 Jan 17 00:27:52.201284 ntpd[1537]: Listening on routing socket on fd #22 for interface updates Jan 17 00:27:52.207569 ntpd[1537]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:27:52.207616 ntpd[1537]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:27:52.377584 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:27:52.404985 update_engine[1576]: I20260117 00:27:52.402985 1576 main.cc:92] Flatcar Update Engine starting Jan 17 00:27:52.409220 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:27:52.409671 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:27:52.414935 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:27:52.415319 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:27:52.420750 update_engine[1576]: I20260117 00:27:52.420075 1576 update_check_scheduler.cc:74] Next update check in 4m10s Jan 17 00:27:52.426533 jq[1579]: true Jan 17 00:27:52.447127 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:27:52.447531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:27:52.458950 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 17 00:27:52.474495 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:27:52.476433 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:27:52.534494 jq[1589]: true Jan 17 00:27:52.539594 systemd-logind[1574]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:27:52.539629 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 17 00:27:52.539673 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:27:52.548187 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:27:52.552487 systemd-logind[1574]: New seat seat0. Jan 17 00:27:52.563366 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:27:52.574806 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:27:52.628187 dbus-daemon[1524]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:27:52.649333 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:27:52.664476 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:27:52.668636 tar[1587]: linux-amd64/LICENSE Jan 17 00:27:52.668636 tar[1587]: linux-amd64/helm Jan 17 00:27:52.667993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:27:52.668261 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:27:52.696587 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 17 00:27:52.708995 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:27:52.709256 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:27:52.722244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:27:52.741944 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:27:52.770140 bash[1625]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:27:52.855039 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:27:52.883641 systemd[1]: Starting sshkeys.service... Jan 17 00:27:52.939701 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:27:52.966518 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 17 00:27:53.129167 coreos-metadata[1631]: Jan 17 00:27:53.126 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Jan 17 00:27:53.130688 coreos-metadata[1631]: Jan 17 00:27:53.130 INFO Fetch failed with 404: resource not found
Jan 17 00:27:53.130688 coreos-metadata[1631]: Jan 17 00:27:53.130 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Jan 17 00:27:53.131314 coreos-metadata[1631]: Jan 17 00:27:53.131 INFO Fetch successful
Jan 17 00:27:53.131460 coreos-metadata[1631]: Jan 17 00:27:53.131 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Jan 17 00:27:53.131825 coreos-metadata[1631]: Jan 17 00:27:53.131 INFO Fetch failed with 404: resource not found
Jan 17 00:27:53.131825 coreos-metadata[1631]: Jan 17 00:27:53.131 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Jan 17 00:27:53.137764 coreos-metadata[1631]: Jan 17 00:27:53.136 INFO Fetch failed with 404: resource not found
Jan 17 00:27:53.137764 coreos-metadata[1631]: Jan 17 00:27:53.136 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Jan 17 00:27:53.138423 coreos-metadata[1631]: Jan 17 00:27:53.138 INFO Fetch successful
Jan 17 00:27:53.152247 unknown[1631]: wrote ssh authorized keys file for user: core
Jan 17 00:27:53.224653 update-ssh-keys[1641]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:27:53.226381 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 00:27:53.256451 systemd[1]: Finished sshkeys.service.
Jan 17 00:27:53.289989 dbus-daemon[1524]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 00:27:53.290280 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 00:27:53.296221 dbus-daemon[1524]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1626 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 00:27:53.317291 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 00:27:53.364772 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:27:53.368157 polkitd[1647]: Started polkitd version 121
Jan 17 00:27:53.422238 polkitd[1647]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 00:27:53.422357 polkitd[1647]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 00:27:53.428461 polkitd[1647]: Finished loading, compiling and executing 2 rules
Jan 17 00:27:53.439596 dbus-daemon[1524]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 00:27:53.439884 systemd[1]: Started polkit.service - Authorization Manager.
Jan 17 00:27:53.440606 polkitd[1647]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 00:27:53.532749 systemd-hostnamed[1626]: Hostname set to (transient)
Jan 17 00:27:53.535568 systemd-resolved[1454]: System hostname changed to 'ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9'.
Jan 17 00:27:53.789826 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:27:53.825912 containerd[1590]: time="2026-01-17T00:27:53.825600997Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 00:27:53.902540 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:27:53.923883 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:27:53.963092 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:27:53.963527 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:27:53.984434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:27:53.991041 containerd[1590]: time="2026-01-17T00:27:53.990138599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:53.998478 containerd[1590]: time="2026-01-17T00:27:53.998410671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:53.998678 containerd[1590]: time="2026-01-17T00:27:53.998651865Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 00:27:53.998809 containerd[1590]: time="2026-01-17T00:27:53.998786467Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:27:53.999147 containerd[1590]: time="2026-01-17T00:27:53.999113046Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.001492402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.001626236Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.001651784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.002023323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.002054755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.002081906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.002100995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.002225179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:54.002659 containerd[1590]: time="2026-01-17T00:27:54.002602369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:27:54.004768 containerd[1590]: time="2026-01-17T00:27:54.004407842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:27:54.004768 containerd[1590]: time="2026-01-17T00:27:54.004445189Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 00:27:54.005401 containerd[1590]: time="2026-01-17T00:27:54.005370136Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 00:27:54.008161 containerd[1590]: time="2026-01-17T00:27:54.007879946Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 00:27:54.018600 containerd[1590]: time="2026-01-17T00:27:54.018035189Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 00:27:54.018600 containerd[1590]: time="2026-01-17T00:27:54.018210556Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 00:27:54.018600 containerd[1590]: time="2026-01-17T00:27:54.018241679Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:27:54.018600 containerd[1590]: time="2026-01-17T00:27:54.018314967Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 00:27:54.018600 containerd[1590]: time="2026-01-17T00:27:54.018359231Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 00:27:54.019084 containerd[1590]: time="2026-01-17T00:27:54.019054373Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023078738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023316279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023350035Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023374791Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023398718Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023422898Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023447057Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023474245Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023500171Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023523209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023544728Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023566115Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023600995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.023761 containerd[1590]: time="2026-01-17T00:27:54.023624392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.024422 containerd[1590]: time="2026-01-17T00:27:54.023646179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.024422 containerd[1590]: time="2026-01-17T00:27:54.023676602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.024422 containerd[1590]: time="2026-01-17T00:27:54.023702530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.027180041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.028825662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.028874469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.028902148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.028930189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.028952932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.028982447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.029007834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.029041757Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.029098648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.029122013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.029141922Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.029218309Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 00:27:54.030145 containerd[1590]: time="2026-01-17T00:27:54.029248276Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 00:27:54.030867 containerd[1590]: time="2026-01-17T00:27:54.029268272Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 00:27:54.030867 containerd[1590]: time="2026-01-17T00:27:54.029298986Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 00:27:54.030867 containerd[1590]: time="2026-01-17T00:27:54.029319052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.030867 containerd[1590]: time="2026-01-17T00:27:54.029340436Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 00:27:54.030867 containerd[1590]: time="2026-01-17T00:27:54.029364584Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 00:27:54.030867 containerd[1590]: time="2026-01-17T00:27:54.029381511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 00:27:54.031159 containerd[1590]: time="2026-01-17T00:27:54.029874046Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 00:27:54.031159 containerd[1590]: time="2026-01-17T00:27:54.029979618Z" level=info msg="Connect containerd service"
Jan 17 00:27:54.031159 containerd[1590]: time="2026-01-17T00:27:54.030057364Z" level=info msg="using legacy CRI server"
Jan 17 00:27:54.031159 containerd[1590]: time="2026-01-17T00:27:54.030070924Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 00:27:54.039305 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 00:27:54.048134 containerd[1590]: time="2026-01-17T00:27:54.048074852Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 00:27:54.058033 containerd[1590]: time="2026-01-17T00:27:54.055464909Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.059451130Z" level=info msg="Start subscribing containerd event"
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.059549912Z" level=info msg="Start recovering state"
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.059669246Z" level=info msg="Start event monitor"
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.059689059Z" level=info msg="Start snapshots syncer"
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.059704021Z" level=info msg="Start cni network conf syncer for default"
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.059733438Z" level=info msg="Start streaming server"
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.060212733Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.060311996Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 00:27:54.060765 containerd[1590]: time="2026-01-17T00:27:54.060392982Z" level=info msg="containerd successfully booted in 0.236322s"
Jan 17 00:27:54.061655 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 00:27:54.081278 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 00:27:54.091260 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 00:27:54.103033 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:27:54.199294 instance-setup[1560]: INFO Running google_set_multiqueue. Jan 17 00:27:54.227963 instance-setup[1560]: INFO Set channels for eth0 to 2. Jan 17 00:27:54.237681 instance-setup[1560]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 17 00:27:54.242036 instance-setup[1560]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 17 00:27:54.242384 instance-setup[1560]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 17 00:27:54.244960 instance-setup[1560]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 17 00:27:54.245208 instance-setup[1560]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 17 00:27:54.249046 instance-setup[1560]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 17 00:27:54.249112 instance-setup[1560]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 17 00:27:54.255367 instance-setup[1560]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 17 00:27:54.265699 instance-setup[1560]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:27:54.277712 instance-setup[1560]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:27:54.284911 instance-setup[1560]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 17 00:27:54.284968 instance-setup[1560]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 17 00:27:54.313552 init.sh[1549]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 17 00:27:54.538675 startup-script[1713]: INFO Starting startup scripts. Jan 17 00:27:54.543305 tar[1587]: linux-amd64/README.md Jan 17 00:27:54.548789 startup-script[1713]: INFO No startup scripts found in metadata. 
Jan 17 00:27:54.548882 startup-script[1713]: INFO Finished running startup scripts.
Jan 17 00:27:54.578443 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 00:27:54.590639 init.sh[1549]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Jan 17 00:27:54.590639 init.sh[1549]: + daemon_pids=()
Jan 17 00:27:54.590864 init.sh[1549]: + for d in accounts clock_skew network
Jan 17 00:27:54.591346 init.sh[1549]: + daemon_pids+=($!)
Jan 17 00:27:54.591346 init.sh[1549]: + for d in accounts clock_skew network
Jan 17 00:27:54.591477 init.sh[1721]: + /usr/bin/google_accounts_daemon
Jan 17 00:27:54.592413 init.sh[1549]: + daemon_pids+=($!)
Jan 17 00:27:54.592413 init.sh[1549]: + for d in accounts clock_skew network
Jan 17 00:27:54.592413 init.sh[1549]: + daemon_pids+=($!)
Jan 17 00:27:54.592413 init.sh[1549]: + NOTIFY_SOCKET=/run/systemd/notify
Jan 17 00:27:54.592413 init.sh[1549]: + /usr/bin/systemd-notify --ready
Jan 17 00:27:54.592710 init.sh[1722]: + /usr/bin/google_clock_skew_daemon
Jan 17 00:27:54.593389 init.sh[1723]: + /usr/bin/google_network_daemon
Jan 17 00:27:54.611466 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Jan 17 00:27:54.625744 init.sh[1549]: + wait -n 1721 1722 1723
Jan 17 00:27:54.917785 google-networking[1723]: INFO Starting Google Networking daemon.
Jan 17 00:27:54.965211 google-clock-skew[1722]: INFO Starting Google Clock Skew daemon.
Jan 17 00:27:54.971855 google-clock-skew[1722]: INFO Clock drift token has changed: 0.
Jan 17 00:27:55.040074 groupadd[1733]: group added to /etc/group: name=google-sudoers, GID=1000
Jan 17 00:27:55.044813 groupadd[1733]: group added to /etc/gshadow: name=google-sudoers
Jan 17 00:27:55.101779 groupadd[1733]: new group: name=google-sudoers, GID=1000
Jan 17 00:27:55.132811 google-accounts[1721]: INFO Starting Google Accounts daemon.
Jan 17 00:27:55.145469 google-accounts[1721]: WARNING OS Login not installed.
Jan 17 00:27:55.146886 google-accounts[1721]: INFO Creating a new user account for 0.
Jan 17 00:27:55.151517 init.sh[1741]: useradd: invalid user name '0': use --badname to ignore
Jan 17 00:27:55.151863 google-accounts[1721]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Jan 17 00:27:55.406977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:27:55.418821 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 00:27:55.424417 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:27:55.429367 systemd[1]: Startup finished in 10.682s (kernel) + 10.096s (userspace) = 20.779s.
Jan 17 00:27:56.001505 google-clock-skew[1722]: INFO Synced system time with hardware clock.
Jan 17 00:27:56.001798 systemd-resolved[1454]: Clock change detected. Flushing caches.
Jan 17 00:27:56.395992 kubelet[1751]: E0117 00:27:56.395838 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:27:56.399066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:27:56.399587 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:28:01.029797 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 00:28:01.036120 systemd[1]: Started sshd@0-10.128.0.62:22-4.153.228.146:36218.service - OpenSSH per-connection server daemon (4.153.228.146:36218).
Jan 17 00:28:01.294314 sshd[1763]: Accepted publickey for core from 4.153.228.146 port 36218 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:28:01.297538 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:01.310031 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 00:28:01.320175 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 00:28:01.323854 systemd-logind[1574]: New session 1 of user core.
Jan 17 00:28:01.347580 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 00:28:01.358361 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 00:28:01.380818 (systemd)[1769]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 00:28:01.523260 systemd[1769]: Queued start job for default target default.target.
Jan 17 00:28:01.524000 systemd[1769]: Created slice app.slice - User Application Slice.
Jan 17 00:28:01.524040 systemd[1769]: Reached target paths.target - Paths.
Jan 17 00:28:01.524062 systemd[1769]: Reached target timers.target - Timers.
Jan 17 00:28:01.532872 systemd[1769]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 00:28:01.543302 systemd[1769]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 00:28:01.543419 systemd[1769]: Reached target sockets.target - Sockets.
Jan 17 00:28:01.543455 systemd[1769]: Reached target basic.target - Basic System.
Jan 17 00:28:01.543530 systemd[1769]: Reached target default.target - Main User Target.
Jan 17 00:28:01.543586 systemd[1769]: Startup finished in 152ms.
Jan 17 00:28:01.544406 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 00:28:01.552827 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 00:28:01.742401 systemd[1]: Started sshd@1-10.128.0.62:22-4.153.228.146:36224.service - OpenSSH per-connection server daemon (4.153.228.146:36224).
Jan 17 00:28:01.974539 sshd[1781]: Accepted publickey for core from 4.153.228.146 port 36224 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:28:01.976419 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:01.983286 systemd-logind[1574]: New session 2 of user core.
Jan 17 00:28:01.994778 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 00:28:02.144454 sshd[1781]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:02.150069 systemd[1]: sshd@1-10.128.0.62:22-4.153.228.146:36224.service: Deactivated successfully.
Jan 17 00:28:02.155393 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 00:28:02.156393 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit.
Jan 17 00:28:02.157937 systemd-logind[1574]: Removed session 2.
Jan 17 00:28:02.182082 systemd[1]: Started sshd@2-10.128.0.62:22-4.153.228.146:36232.service - OpenSSH per-connection server daemon (4.153.228.146:36232).
Jan 17 00:28:02.411613 sshd[1789]: Accepted publickey for core from 4.153.228.146 port 36232 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:28:02.413630 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:02.420587 systemd-logind[1574]: New session 3 of user core.
Jan 17 00:28:02.426197 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 00:28:02.578160 sshd[1789]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:02.583770 systemd[1]: sshd@2-10.128.0.62:22-4.153.228.146:36232.service: Deactivated successfully.
Jan 17 00:28:02.587086 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit.
Jan 17 00:28:02.587936 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 00:28:02.590379 systemd-logind[1574]: Removed session 3.
Jan 17 00:28:02.620197 systemd[1]: Started sshd@3-10.128.0.62:22-4.153.228.146:36248.service - OpenSSH per-connection server daemon (4.153.228.146:36248).
Jan 17 00:28:02.831717 sshd[1797]: Accepted publickey for core from 4.153.228.146 port 36248 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:28:02.833806 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:02.840878 systemd-logind[1574]: New session 4 of user core.
Jan 17 00:28:02.846128 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 00:28:03.000103 sshd[1797]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:03.006054 systemd[1]: sshd@3-10.128.0.62:22-4.153.228.146:36248.service: Deactivated successfully.
Jan 17 00:28:03.010802 systemd[1]: session-4.scope: Deactivated successfully.
Jan 17 00:28:03.011801 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit.
Jan 17 00:28:03.013299 systemd-logind[1574]: Removed session 4.
Jan 17 00:28:03.047194 systemd[1]: Started sshd@4-10.128.0.62:22-4.153.228.146:36250.service - OpenSSH per-connection server daemon (4.153.228.146:36250).
Jan 17 00:28:03.259548 sshd[1805]: Accepted publickey for core from 4.153.228.146 port 36250 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:28:03.261611 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:03.267678 systemd-logind[1574]: New session 5 of user core.
Jan 17 00:28:03.277107 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 00:28:03.421609 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 00:28:03.422171 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:28:03.436794 sudo[1809]: pam_unix(sudo:session): session closed for user root
Jan 17 00:28:03.468557 sshd[1805]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:03.474523 systemd[1]: sshd@4-10.128.0.62:22-4.153.228.146:36250.service: Deactivated successfully.
Jan 17 00:28:03.479339 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit.
Jan 17 00:28:03.479793 systemd[1]: session-5.scope: Deactivated successfully.
Jan 17 00:28:03.482330 systemd-logind[1574]: Removed session 5.
Jan 17 00:28:03.507438 systemd[1]: Started sshd@5-10.128.0.62:22-4.153.228.146:36260.service - OpenSSH per-connection server daemon (4.153.228.146:36260).
Jan 17 00:28:03.738227 sshd[1814]: Accepted publickey for core from 4.153.228.146 port 36260 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:28:03.740157 sshd[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:03.747593 systemd-logind[1574]: New session 6 of user core.
Jan 17 00:28:03.754151 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 00:28:03.886058 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 00:28:03.886589 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:28:03.891760 sudo[1819]: pam_unix(sudo:session): session closed for user root
Jan 17 00:28:03.906186 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 00:28:03.906742 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:28:03.927552 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 00:28:03.929879 auditctl[1822]: No rules
Jan 17 00:28:03.930493 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 00:28:03.930944 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 00:28:03.948829 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:28:03.980998 augenrules[1841]: No rules
Jan 17 00:28:03.983191 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:28:03.986075 sudo[1818]: pam_unix(sudo:session): session closed for user root
Jan 17 00:28:04.020502 sshd[1814]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:04.025817 systemd[1]: sshd@5-10.128.0.62:22-4.153.228.146:36260.service: Deactivated successfully.
Jan 17 00:28:04.031089 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit.
Jan 17 00:28:04.032012 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 00:28:04.034000 systemd-logind[1574]: Removed session 6.
Jan 17 00:28:04.058494 systemd[1]: Started sshd@6-10.128.0.62:22-4.153.228.146:36272.service - OpenSSH per-connection server daemon (4.153.228.146:36272).
Jan 17 00:28:04.292453 sshd[1850]: Accepted publickey for core from 4.153.228.146 port 36272 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:28:04.294425 sshd[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:28:04.301248 systemd-logind[1574]: New session 7 of user core.
Jan 17 00:28:04.309161 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 00:28:04.439335 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 00:28:04.439873 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:28:04.885143 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 00:28:04.898674 (dockerd)[1870]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 00:28:05.340669 dockerd[1870]: time="2026-01-17T00:28:05.340543370Z" level=info msg="Starting up"
Jan 17 00:28:06.038617 systemd[1]: var-lib-docker-metacopy\x2dcheck3672295180-merged.mount: Deactivated successfully.
Jan 17 00:28:06.058065 dockerd[1870]: time="2026-01-17T00:28:06.057971352Z" level=info msg="Loading containers: start."
Jan 17 00:28:06.208005 kernel: Initializing XFRM netlink socket
Jan 17 00:28:06.315868 systemd-networkd[1221]: docker0: Link UP
Jan 17 00:28:06.340657 dockerd[1870]: time="2026-01-17T00:28:06.340602606Z" level=info msg="Loading containers: done."
Jan 17 00:28:06.363260 dockerd[1870]: time="2026-01-17T00:28:06.363194980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 00:28:06.363922 dockerd[1870]: time="2026-01-17T00:28:06.363330170Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 00:28:06.363922 dockerd[1870]: time="2026-01-17T00:28:06.363488235Z" level=info msg="Daemon has completed initialization"
Jan 17 00:28:06.402228 dockerd[1870]: time="2026-01-17T00:28:06.402173343Z" level=info msg="API listen on /run/docker.sock"
Jan 17 00:28:06.402348 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 00:28:06.404556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:28:06.412271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:06.774279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:06.775846 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:28:06.862762 kubelet[2018]: E0117 00:28:06.862188 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:28:06.868007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:28:06.868320 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:28:07.443241 containerd[1590]: time="2026-01-17T00:28:07.443192206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 17 00:28:08.020245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057583088.mount: Deactivated successfully.
Jan 17 00:28:10.252034 containerd[1590]: time="2026-01-17T00:28:10.251957417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:10.253731 containerd[1590]: time="2026-01-17T00:28:10.253632221Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070653"
Jan 17 00:28:10.255122 containerd[1590]: time="2026-01-17T00:28:10.255038520Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:10.260733 containerd[1590]: time="2026-01-17T00:28:10.259107212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:10.260733 containerd[1590]: time="2026-01-17T00:28:10.260589330Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.817342859s"
Jan 17 00:28:10.260733 containerd[1590]: time="2026-01-17T00:28:10.260640730Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 17 00:28:10.261942 containerd[1590]: time="2026-01-17T00:28:10.261872447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 17 00:28:12.059144 containerd[1590]: time="2026-01-17T00:28:12.059073654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:12.060954 containerd[1590]: time="2026-01-17T00:28:12.060848024Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993360"
Jan 17 00:28:12.062199 containerd[1590]: time="2026-01-17T00:28:12.062132892Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:12.066787 containerd[1590]: time="2026-01-17T00:28:12.066694948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:12.068652 containerd[1590]: time="2026-01-17T00:28:12.068494336Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.806577046s"
Jan 17 00:28:12.068652 containerd[1590]: time="2026-01-17T00:28:12.068545107Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 17 00:28:12.069515 containerd[1590]: time="2026-01-17T00:28:12.069470909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 17 00:28:13.502022 containerd[1590]: time="2026-01-17T00:28:13.501956711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:13.503764 containerd[1590]: time="2026-01-17T00:28:13.503559288Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405082"
Jan 17 00:28:13.505743 containerd[1590]: time="2026-01-17T00:28:13.505296420Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:13.509944 containerd[1590]: time="2026-01-17T00:28:13.509305361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:13.510975 containerd[1590]: time="2026-01-17T00:28:13.510923141Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.441402091s"
Jan 17 00:28:13.511102 containerd[1590]: time="2026-01-17T00:28:13.510980220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 17 00:28:13.511749 containerd[1590]: time="2026-01-17T00:28:13.511660686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 17 00:28:14.799134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822104942.mount: Deactivated successfully.
Jan 17 00:28:15.517769 containerd[1590]: time="2026-01-17T00:28:15.517676754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:15.519378 containerd[1590]: time="2026-01-17T00:28:15.519144466Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161905"
Jan 17 00:28:15.522263 containerd[1590]: time="2026-01-17T00:28:15.520774228Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:15.524644 containerd[1590]: time="2026-01-17T00:28:15.523614816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:15.524644 containerd[1590]: time="2026-01-17T00:28:15.524472256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.012741273s"
Jan 17 00:28:15.524644 containerd[1590]: time="2026-01-17T00:28:15.524518661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 17 00:28:15.525599 containerd[1590]: time="2026-01-17T00:28:15.525547828Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 17 00:28:15.962601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918280696.mount: Deactivated successfully.
Jan 17 00:28:16.974583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 00:28:16.983312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:17.306996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:17.308645 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:28:17.347745 containerd[1590]: time="2026-01-17T00:28:17.345862653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:17.349617 containerd[1590]: time="2026-01-17T00:28:17.349541730Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247"
Jan 17 00:28:17.351536 containerd[1590]: time="2026-01-17T00:28:17.351491283Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:17.357809 containerd[1590]: time="2026-01-17T00:28:17.357768831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:17.359357 containerd[1590]: time="2026-01-17T00:28:17.359223467Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.833478193s"
Jan 17 00:28:17.359539 containerd[1590]: time="2026-01-17T00:28:17.359513830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 17 00:28:17.360934 containerd[1590]: time="2026-01-17T00:28:17.360905726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 17 00:28:17.374253 kubelet[2167]: E0117 00:28:17.374121 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:28:17.377621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:28:17.378262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:28:17.827350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865998073.mount: Deactivated successfully.
Jan 17 00:28:17.834402 containerd[1590]: time="2026-01-17T00:28:17.834340583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:17.835549 containerd[1590]: time="2026-01-17T00:28:17.835402070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Jan 17 00:28:17.838345 containerd[1590]: time="2026-01-17T00:28:17.836546864Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:17.839741 containerd[1590]: time="2026-01-17T00:28:17.839479755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:17.841694 containerd[1590]: time="2026-01-17T00:28:17.840562873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 479.497111ms"
Jan 17 00:28:17.841694 containerd[1590]: time="2026-01-17T00:28:17.840606839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 17 00:28:17.841694 containerd[1590]: time="2026-01-17T00:28:17.841464661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 17 00:28:18.337081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003319603.mount: Deactivated successfully.
Jan 17 00:28:20.994674 containerd[1590]: time="2026-01-17T00:28:20.994600328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:20.996514 containerd[1590]: time="2026-01-17T00:28:20.996439316Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682062"
Jan 17 00:28:20.999725 containerd[1590]: time="2026-01-17T00:28:20.998515875Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:21.003219 containerd[1590]: time="2026-01-17T00:28:21.003174929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:21.004839 containerd[1590]: time="2026-01-17T00:28:21.004797402Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.163294405s"
Jan 17 00:28:21.005015 containerd[1590]: time="2026-01-17T00:28:21.004984802Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 17 00:28:23.644008 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 17 00:28:24.264828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:24.273080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:24.317724 systemd[1]: Reloading requested from client PID 2266 ('systemctl') (unit session-7.scope)...
Jan 17 00:28:24.317749 systemd[1]: Reloading...
Jan 17 00:28:24.477773 zram_generator::config[2312]: No configuration found.
Jan 17 00:28:24.653129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:28:24.754439 systemd[1]: Reloading finished in 436 ms.
Jan 17 00:28:24.818684 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 00:28:24.818914 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 00:28:24.819479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:24.827434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:25.132974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:25.147402 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:28:25.208633 kubelet[2369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:28:25.210731 kubelet[2369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:28:25.210731 kubelet[2369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:28:25.210731 kubelet[2369]: I0117 00:28:25.209378 2369 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:28:25.940579 kubelet[2369]: I0117 00:28:25.938835 2369 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:28:25.940579 kubelet[2369]: I0117 00:28:25.938883 2369 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:28:25.940579 kubelet[2369]: I0117 00:28:25.939738 2369 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:28:25.981145 kubelet[2369]: E0117 00:28:25.981093 2369 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:28:25.987234 kubelet[2369]: I0117 00:28:25.987183 2369 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:28:25.996752 kubelet[2369]: E0117 00:28:25.995385 2369 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:28:25.996752 kubelet[2369]: I0117 00:28:25.995428 2369 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:28:25.999307 kubelet[2369]: I0117 00:28:25.999275 2369 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:28:26.002288 kubelet[2369]: I0117 00:28:26.002195 2369 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:28:26.002536 kubelet[2369]: I0117 00:28:26.002264 2369 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 17 00:28:26.002825 kubelet[2369]: I0117 00:28:26.002554 2369 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:28:26.002825 kubelet[2369]: I0117 00:28:26.002575 2369 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:28:26.002825 kubelet[2369]: I0117 00:28:26.002812 2369 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:28:26.009221 kubelet[2369]: I0117 00:28:26.009164 2369 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:28:26.009221 kubelet[2369]: I0117 00:28:26.009225 2369 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:28:26.009427 kubelet[2369]: I0117 00:28:26.009259 2369 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:28:26.009427 kubelet[2369]: I0117 00:28:26.009276 2369 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:28:26.016449 kubelet[2369]: I0117 00:28:26.015589 2369 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:28:26.016449 kubelet[2369]: I0117 00:28:26.016364 2369 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:28:26.018776 kubelet[2369]: W0117 00:28:26.017597 2369 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:28:26.021726 kubelet[2369]: I0117 00:28:26.020431 2369 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:28:26.021726 kubelet[2369]: I0117 00:28:26.020481 2369 server.go:1287] "Started kubelet"
Jan 17 00:28:26.021726 kubelet[2369]: W0117 00:28:26.020690 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused
Jan 17 00:28:26.021726 kubelet[2369]: E0117 00:28:26.020797 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:28:26.021726 kubelet[2369]: W0117 00:28:26.020923 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9&limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused
Jan 17 00:28:26.021726 kubelet[2369]: E0117 00:28:26.020973 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9&limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:28:26.033432 kubelet[2369]: E0117 00:28:26.029967 2369 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9.188b5d2839a83fc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,UID:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,},FirstTimestamp:2026-01-17 00:28:26.020454342 +0000 UTC m=+0.867688639,LastTimestamp:2026-01-17 00:28:26.020454342 +0000 UTC m=+0.867688639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,}"
Jan 17 00:28:26.036168 kubelet[2369]: I0117 00:28:26.035404 2369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:28:26.036168 kubelet[2369]: I0117 00:28:26.035890 2369 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:28:26.039443 kubelet[2369]: I0117 00:28:26.039388 2369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:28:26.043251 kubelet[2369]: I0117 00:28:26.041236 2369 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:28:26.044397 kubelet[2369]: I0117 00:28:26.044372 2369 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:28:26.048117 kubelet[2369]: I0117 00:28:26.048090 2369 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:28:26.048389 kubelet[2369]: E0117 00:28:26.048360 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found"
Jan 17 00:28:26.048567 kubelet[2369]: I0117 00:28:26.048536 2369 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:28:26.052784 kubelet[2369]: I0117 00:28:26.052759 2369 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:28:26.053130 kubelet[2369]: I0117 00:28:26.053110 2369 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:28:26.053996 kubelet[2369]: W0117 00:28:26.053924 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused
Jan 17 00:28:26.054177 kubelet[2369]: E0117 00:28:26.054153 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:28:26.054407 kubelet[2369]: E0117 00:28:26.054373 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9?timeout=10s\": dial tcp 10.128.0.62:6443: connect: connection refused" interval="200ms"
Jan 17 00:28:26.056176 kubelet[2369]: I0117 00:28:26.056148 2369 factory.go:221] Registration of the systemd container factory successfully
Jan 17 00:28:26.056411 kubelet[2369]: I0117 00:28:26.056385 2369 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:28:26.059372 kubelet[2369]: E0117 00:28:26.059345 2369 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:28:26.059799 kubelet[2369]: I0117 00:28:26.059780 2369 factory.go:221] Registration of the containerd container factory successfully
Jan 17 00:28:26.076257 kubelet[2369]: I0117 00:28:26.076182 2369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:28:26.078053 kubelet[2369]: I0117 00:28:26.077995 2369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:28:26.078053 kubelet[2369]: I0117 00:28:26.078028 2369 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 17 00:28:26.078233 kubelet[2369]: I0117 00:28:26.078061 2369 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:28:26.078233 kubelet[2369]: I0117 00:28:26.078075 2369 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 17 00:28:26.078233 kubelet[2369]: E0117 00:28:26.078147 2369 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:28:26.096342 kubelet[2369]: W0117 00:28:26.096184 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused
Jan 17 00:28:26.096342 kubelet[2369]: E0117 00:28:26.096272 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:28:26.109086 kubelet[2369]: I0117 00:28:26.109053 2369 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:28:26.109563 kubelet[2369]: I0117 00:28:26.109288 2369 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:28:26.109563 kubelet[2369]: I0117 00:28:26.109320 2369 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:28:26.111759 kubelet[2369]: I0117 00:28:26.111633 2369 policy_none.go:49] "None policy: Start"
Jan 17 00:28:26.111759 kubelet[2369]: I0117 00:28:26.111666 2369 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 17 00:28:26.111759 kubelet[2369]: I0117 00:28:26.111682 2369 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 00:28:26.118511 kubelet[2369]: I0117 00:28:26.118459 2369 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 00:28:26.118778 kubelet[2369]: I0117 00:28:26.118747 2369 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:28:26.118882 kubelet[2369]: I0117 00:28:26.118769 2369 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:28:26.120553 kubelet[2369]: I0117 00:28:26.120506 2369 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:28:26.125976 kubelet[2369]: E0117 00:28:26.125922 2369 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Jan 17 00:28:26.126134 kubelet[2369]: E0117 00:28:26.126000 2369 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" Jan 17 00:28:26.193420 kubelet[2369]: E0117 00:28:26.193279 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.201282 kubelet[2369]: E0117 00:28:26.201223 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.205904 kubelet[2369]: E0117 00:28:26.205854 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.224794 kubelet[2369]: I0117 00:28:26.224735 2369 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.225653 kubelet[2369]: E0117 00:28:26.225210 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.62:6443/api/v1/nodes\": dial tcp 10.128.0.62:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255181 kubelet[2369]: I0117 00:28:26.254388 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255181 kubelet[2369]: I0117 00:28:26.254462 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255181 kubelet[2369]: I0117 00:28:26.254523 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a48f984adeb07181ec834ca7c575521f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"a48f984adeb07181ec834ca7c575521f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255181 kubelet[2369]: I0117 00:28:26.254565 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255485 kubelet[2369]: I0117 00:28:26.254595 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255485 kubelet[2369]: I0117 00:28:26.254623 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255485 kubelet[2369]: I0117 00:28:26.254650 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bec645ab57ac9704d236a06db780e060-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"bec645ab57ac9704d236a06db780e060\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255485 kubelet[2369]: I0117 00:28:26.254676 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a48f984adeb07181ec834ca7c575521f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"a48f984adeb07181ec834ca7c575521f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255605 kubelet[2369]: I0117 00:28:26.254725 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a48f984adeb07181ec834ca7c575521f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" 
(UID: \"a48f984adeb07181ec834ca7c575521f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.255605 kubelet[2369]: E0117 00:28:26.254936 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9?timeout=10s\": dial tcp 10.128.0.62:6443: connect: connection refused" interval="400ms" Jan 17 00:28:26.430801 kubelet[2369]: I0117 00:28:26.430750 2369 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.431246 kubelet[2369]: E0117 00:28:26.431191 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.62:6443/api/v1/nodes\": dial tcp 10.128.0.62:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.495619 containerd[1590]: time="2026-01-17T00:28:26.495552690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,Uid:a48f984adeb07181ec834ca7c575521f,Namespace:kube-system,Attempt:0,}" Jan 17 00:28:26.502365 containerd[1590]: time="2026-01-17T00:28:26.502304527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,Uid:84950f8de6b79ce02d5de4fdd62bec08,Namespace:kube-system,Attempt:0,}" Jan 17 00:28:26.507446 containerd[1590]: time="2026-01-17T00:28:26.507104565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,Uid:bec645ab57ac9704d236a06db780e060,Namespace:kube-system,Attempt:0,}" Jan 17 00:28:26.656502 kubelet[2369]: E0117 00:28:26.656430 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9?timeout=10s\": dial tcp 10.128.0.62:6443: connect: connection refused" interval="800ms" Jan 17 00:28:26.836541 kubelet[2369]: I0117 00:28:26.836217 2369 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.837122 kubelet[2369]: E0117 00:28:26.836846 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.62:6443/api/v1/nodes\": dial tcp 10.128.0.62:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:26.986609 kubelet[2369]: W0117 00:28:26.986257 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused Jan 17 00:28:26.986609 kubelet[2369]: E0117 00:28:26.986357 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:28:26.996380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331380613.mount: Deactivated successfully. 
Jan 17 00:28:27.005720 containerd[1590]: time="2026-01-17T00:28:27.005647768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:28:27.007048 containerd[1590]: time="2026-01-17T00:28:27.006982579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Jan 17 00:28:27.008962 containerd[1590]: time="2026-01-17T00:28:27.008869593Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:28:27.012974 containerd[1590]: time="2026-01-17T00:28:27.012887802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:28:27.015681 containerd[1590]: time="2026-01-17T00:28:27.015624964Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:28:27.019143 containerd[1590]: time="2026-01-17T00:28:27.018098795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:28:27.019143 containerd[1590]: time="2026-01-17T00:28:27.019051345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:28:27.020171 containerd[1590]: time="2026-01-17T00:28:27.020133779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:28:27.025855 
containerd[1590]: time="2026-01-17T00:28:27.025810841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.140275ms" Jan 17 00:28:27.027560 containerd[1590]: time="2026-01-17T00:28:27.027503545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.107115ms" Jan 17 00:28:27.029139 containerd[1590]: time="2026-01-17T00:28:27.029090285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.893671ms" Jan 17 00:28:27.256284 containerd[1590]: time="2026-01-17T00:28:27.255845523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:27.256284 containerd[1590]: time="2026-01-17T00:28:27.255915236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:27.256284 containerd[1590]: time="2026-01-17T00:28:27.255940450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:27.256284 containerd[1590]: time="2026-01-17T00:28:27.256073133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:27.259165 containerd[1590]: time="2026-01-17T00:28:27.258513293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:27.259165 containerd[1590]: time="2026-01-17T00:28:27.258646306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:27.259165 containerd[1590]: time="2026-01-17T00:28:27.258674136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:27.259165 containerd[1590]: time="2026-01-17T00:28:27.258856202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:27.271836 containerd[1590]: time="2026-01-17T00:28:27.269582380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:27.271836 containerd[1590]: time="2026-01-17T00:28:27.269654972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:27.271836 containerd[1590]: time="2026-01-17T00:28:27.269673883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:27.271836 containerd[1590]: time="2026-01-17T00:28:27.270027798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:27.425985 containerd[1590]: time="2026-01-17T00:28:27.425665411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,Uid:84950f8de6b79ce02d5de4fdd62bec08,Namespace:kube-system,Attempt:0,} returns sandbox id \"bff232d259c91ef496121656180119001cf14a5c56aa3ce6a6ee103552ac2218\"" Jan 17 00:28:27.425985 containerd[1590]: time="2026-01-17T00:28:27.425893724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,Uid:a48f984adeb07181ec834ca7c575521f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4da72271eed9d40c378ec653156ab262041ed0c5c8eec30504263eba003af1e2\"" Jan 17 00:28:27.428978 containerd[1590]: time="2026-01-17T00:28:27.428930341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,Uid:bec645ab57ac9704d236a06db780e060,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cccfca76a4e5cdfaa4cbdd20f6a324c2898dd3d16b367e9c0766658b4ab9562\"" Jan 17 00:28:27.430572 kubelet[2369]: E0117 00:28:27.430452 2369 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7" Jan 17 00:28:27.434582 kubelet[2369]: E0117 00:28:27.433638 2369 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8" Jan 17 00:28:27.434582 kubelet[2369]: E0117 00:28:27.434480 2369 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" 
podName="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7" Jan 17 00:28:27.435598 containerd[1590]: time="2026-01-17T00:28:27.435555157Z" level=info msg="CreateContainer within sandbox \"4da72271eed9d40c378ec653156ab262041ed0c5c8eec30504263eba003af1e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:28:27.436175 containerd[1590]: time="2026-01-17T00:28:27.436133213Z" level=info msg="CreateContainer within sandbox \"bff232d259c91ef496121656180119001cf14a5c56aa3ce6a6ee103552ac2218\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:28:27.438605 containerd[1590]: time="2026-01-17T00:28:27.438572053Z" level=info msg="CreateContainer within sandbox \"8cccfca76a4e5cdfaa4cbdd20f6a324c2898dd3d16b367e9c0766658b4ab9562\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:28:27.458219 kubelet[2369]: E0117 00:28:27.458165 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9?timeout=10s\": dial tcp 10.128.0.62:6443: connect: connection refused" interval="1.6s" Jan 17 00:28:27.461186 containerd[1590]: time="2026-01-17T00:28:27.461125722Z" level=info msg="CreateContainer within sandbox \"4da72271eed9d40c378ec653156ab262041ed0c5c8eec30504263eba003af1e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6177836e3ac6921aa5cfe6a46c0c0892a1189316623b69a3e8c32f472a257352\"" Jan 17 00:28:27.462137 containerd[1590]: time="2026-01-17T00:28:27.462100627Z" level=info msg="StartContainer for \"6177836e3ac6921aa5cfe6a46c0c0892a1189316623b69a3e8c32f472a257352\"" Jan 17 00:28:27.464526 containerd[1590]: time="2026-01-17T00:28:27.464458256Z" level=info msg="CreateContainer within sandbox 
\"bff232d259c91ef496121656180119001cf14a5c56aa3ce6a6ee103552ac2218\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da05d504bdf4f081ebe8608ff24bc066c28e1a41162d71df758964e42dd4d6f6\"" Jan 17 00:28:27.465802 containerd[1590]: time="2026-01-17T00:28:27.465584088Z" level=info msg="StartContainer for \"da05d504bdf4f081ebe8608ff24bc066c28e1a41162d71df758964e42dd4d6f6\"" Jan 17 00:28:27.470264 containerd[1590]: time="2026-01-17T00:28:27.470221328Z" level=info msg="CreateContainer within sandbox \"8cccfca76a4e5cdfaa4cbdd20f6a324c2898dd3d16b367e9c0766658b4ab9562\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9636a25b195a8e34aec5667e18608a8fb85879aa9e5f31508319d3fa057c8994\"" Jan 17 00:28:27.471515 containerd[1590]: time="2026-01-17T00:28:27.471475773Z" level=info msg="StartContainer for \"9636a25b195a8e34aec5667e18608a8fb85879aa9e5f31508319d3fa057c8994\"" Jan 17 00:28:27.532830 kubelet[2369]: W0117 00:28:27.530742 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused Jan 17 00:28:27.532830 kubelet[2369]: E0117 00:28:27.530843 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:28:27.575181 kubelet[2369]: W0117 00:28:27.575100 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9&limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused 
Jan 17 00:28:27.575373 kubelet[2369]: E0117 00:28:27.575199 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9&limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:28:27.638629 kubelet[2369]: W0117 00:28:27.638536 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.62:6443: connect: connection refused Jan 17 00:28:27.639824 kubelet[2369]: E0117 00:28:27.638642 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.62:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:28:27.642362 kubelet[2369]: I0117 00:28:27.642215 2369 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:27.643858 kubelet[2369]: E0117 00:28:27.643086 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.62:6443/api/v1/nodes\": dial tcp 10.128.0.62:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:27.659480 containerd[1590]: time="2026-01-17T00:28:27.658973355Z" level=info msg="StartContainer for \"6177836e3ac6921aa5cfe6a46c0c0892a1189316623b69a3e8c32f472a257352\" returns successfully" Jan 17 00:28:27.688517 containerd[1590]: time="2026-01-17T00:28:27.688430098Z" level=info msg="StartContainer for 
\"9636a25b195a8e34aec5667e18608a8fb85879aa9e5f31508319d3fa057c8994\" returns successfully" Jan 17 00:28:27.688688 containerd[1590]: time="2026-01-17T00:28:27.688430155Z" level=info msg="StartContainer for \"da05d504bdf4f081ebe8608ff24bc066c28e1a41162d71df758964e42dd4d6f6\" returns successfully" Jan 17 00:28:28.117739 kubelet[2369]: E0117 00:28:28.115365 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:28.124689 kubelet[2369]: E0117 00:28:28.124631 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:28.135428 kubelet[2369]: E0117 00:28:28.135383 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:29.137477 kubelet[2369]: E0117 00:28:29.137019 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:29.139266 kubelet[2369]: E0117 00:28:29.139219 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" Jan 17 00:28:29.250737 kubelet[2369]: I0117 00:28:29.250419 2369 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.181485 kubelet[2369]: I0117 00:28:31.181417 2369 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.250065 kubelet[2369]: I0117 00:28:31.248786 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.272734 kubelet[2369]: E0117 00:28:31.270520 2369 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9.188b5d2839a83fc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,UID:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,},FirstTimestamp:2026-01-17 00:28:26.020454342 +0000 UTC m=+0.867688639,LastTimestamp:2026-01-17 00:28:26.020454342 +0000 UTC m=+0.867688639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9,}"
Jan 17 00:28:31.285789 kubelet[2369]: E0117 00:28:31.284316 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Jan 17 00:28:31.301733 kubelet[2369]: E0117 00:28:31.299687 2369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.301733 kubelet[2369]: I0117 00:28:31.299757 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.306802 kubelet[2369]: E0117 00:28:31.306668 2369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.306802 kubelet[2369]: I0117 00:28:31.306736 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.318022 kubelet[2369]: E0117 00:28:31.317917 2369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.767514 kubelet[2369]: I0117 00:28:31.767451 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:31.770271 kubelet[2369]: E0117 00:28:31.770205 2369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:32.022735 kubelet[2369]: I0117 00:28:32.022577 2369 apiserver.go:52] "Watching apiserver"
Jan 17 00:28:32.049009 kubelet[2369]: I0117 00:28:32.048964 2369 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 17 00:28:33.139484 systemd[1]: Reloading requested from client PID 2646 ('systemctl') (unit session-7.scope)...
Jan 17 00:28:33.139508 systemd[1]: Reloading...
Jan 17 00:28:33.293730 zram_generator::config[2692]: No configuration found.
Jan 17 00:28:33.442869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:28:33.596692 systemd[1]: Reloading finished in 456 ms.
Jan 17 00:28:33.653345 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:33.671531 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 00:28:33.672031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:33.683178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:33.981952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:33.997338 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:28:34.082959 kubelet[2744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:28:34.082959 kubelet[2744]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:28:34.082959 kubelet[2744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:28:34.083597 kubelet[2744]: I0117 00:28:34.083062 2744 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:28:34.095776 kubelet[2744]: I0117 00:28:34.095734 2744 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:28:34.096739 kubelet[2744]: I0117 00:28:34.095931 2744 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:28:34.096739 kubelet[2744]: I0117 00:28:34.096228 2744 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:28:34.098505 kubelet[2744]: I0117 00:28:34.098449 2744 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 17 00:28:34.102635 kubelet[2744]: I0117 00:28:34.102579 2744 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:28:34.108417 kubelet[2744]: E0117 00:28:34.108331 2744 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:28:34.108417 kubelet[2744]: I0117 00:28:34.108380 2744 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:28:34.113027 kubelet[2744]: I0117 00:28:34.112996 2744 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:28:34.114356 kubelet[2744]: I0117 00:28:34.114307 2744 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:28:34.114899 kubelet[2744]: I0117 00:28:34.114357 2744 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 17 00:28:34.114899 kubelet[2744]: I0117 00:28:34.114646 2744 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:28:34.114899 kubelet[2744]: I0117 00:28:34.114666 2744 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:28:34.114899 kubelet[2744]: I0117 00:28:34.114765 2744 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:28:34.115229 kubelet[2744]: I0117 00:28:34.115014 2744 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:28:34.115229 kubelet[2744]: I0117 00:28:34.115044 2744 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:28:34.115340 kubelet[2744]: I0117 00:28:34.115295 2744 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:28:34.115340 kubelet[2744]: I0117 00:28:34.115314 2744 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:28:34.119903 kubelet[2744]: I0117 00:28:34.119877 2744 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:28:34.120578 kubelet[2744]: I0117 00:28:34.120546 2744 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:28:34.127750 kubelet[2744]: I0117 00:28:34.125937 2744 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:28:34.127750 kubelet[2744]: I0117 00:28:34.125985 2744 server.go:1287] "Started kubelet"
Jan 17 00:28:34.138904 kubelet[2744]: I0117 00:28:34.138661 2744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:28:34.158000 kubelet[2744]: I0117 00:28:34.157210 2744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:28:34.161657 kubelet[2744]: I0117 00:28:34.161624 2744 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:28:34.170216 kubelet[2744]: I0117 00:28:34.156366 2744 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:28:34.172453 sudo[2760]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 17 00:28:34.173059 sudo[2760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 17 00:28:34.175082 kubelet[2744]: I0117 00:28:34.175051 2744 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:28:34.176561 kubelet[2744]: E0117 00:28:34.175672 2744 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" not found"
Jan 17 00:28:34.182893 kubelet[2744]: I0117 00:28:34.178108 2744 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:28:34.182893 kubelet[2744]: I0117 00:28:34.180326 2744 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:28:34.183574 kubelet[2744]: I0117 00:28:34.183302 2744 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:28:34.187251 kubelet[2744]: I0117 00:28:34.187188 2744 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:28:34.210735 kubelet[2744]: I0117 00:28:34.210566 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:28:34.217738 kubelet[2744]: I0117 00:28:34.216249 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:28:34.217738 kubelet[2744]: I0117 00:28:34.216300 2744 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 17 00:28:34.217738 kubelet[2744]: I0117 00:28:34.216329 2744 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:28:34.217738 kubelet[2744]: I0117 00:28:34.216341 2744 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 17 00:28:34.217738 kubelet[2744]: E0117 00:28:34.216407 2744 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:28:34.261697 kubelet[2744]: I0117 00:28:34.261575 2744 factory.go:221] Registration of the containerd container factory successfully
Jan 17 00:28:34.261697 kubelet[2744]: I0117 00:28:34.261607 2744 factory.go:221] Registration of the systemd container factory successfully
Jan 17 00:28:34.261931 kubelet[2744]: I0117 00:28:34.261758 2744 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:28:34.297817 kubelet[2744]: E0117 00:28:34.296767 2744 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:28:34.316755 kubelet[2744]: E0117 00:28:34.316541 2744 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 00:28:34.413504 kubelet[2744]: I0117 00:28:34.413477 2744 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:28:34.413796 kubelet[2744]: I0117 00:28:34.413776 2744 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:28:34.413969 kubelet[2744]: I0117 00:28:34.413956 2744 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:28:34.414600 kubelet[2744]: I0117 00:28:34.414468 2744 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 00:28:34.414600 kubelet[2744]: I0117 00:28:34.414521 2744 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 00:28:34.414600 kubelet[2744]: I0117 00:28:34.414564 2744 policy_none.go:49] "None policy: Start"
Jan 17 00:28:34.414897 kubelet[2744]: I0117 00:28:34.414690 2744 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 17 00:28:34.415265 kubelet[2744]: I0117 00:28:34.414966 2744 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 00:28:34.415265 kubelet[2744]: I0117 00:28:34.415178 2744 state_mem.go:75] "Updated machine memory state"
Jan 17 00:28:34.418721 kubelet[2744]: I0117 00:28:34.418308 2744 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 00:28:34.418721 kubelet[2744]: I0117 00:28:34.418544 2744 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:28:34.418721 kubelet[2744]: I0117 00:28:34.418561 2744 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:28:34.421980 kubelet[2744]: I0117 00:28:34.421450 2744 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:28:34.425123 kubelet[2744]: E0117 00:28:34.425095 2744 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:28:34.517935 kubelet[2744]: I0117 00:28:34.517792 2744 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.520531 kubelet[2744]: I0117 00:28:34.518459 2744 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.522761 kubelet[2744]: I0117 00:28:34.518674 2744 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.530033 kubelet[2744]: W0117 00:28:34.529777 2744 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jan 17 00:28:34.533876 kubelet[2744]: W0117 00:28:34.533774 2744 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jan 17 00:28:34.534489 kubelet[2744]: W0117 00:28:34.534160 2744 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jan 17 00:28:34.542185 kubelet[2744]: I0117 00:28:34.542037 2744 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.559836 kubelet[2744]: I0117 00:28:34.559795 2744 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.560117 kubelet[2744]: I0117 00:28:34.559899 2744 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593279 kubelet[2744]: I0117 00:28:34.593221 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593279 kubelet[2744]: I0117 00:28:34.593275 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bec645ab57ac9704d236a06db780e060-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"bec645ab57ac9704d236a06db780e060\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593523 kubelet[2744]: I0117 00:28:34.593304 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a48f984adeb07181ec834ca7c575521f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"a48f984adeb07181ec834ca7c575521f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593523 kubelet[2744]: I0117 00:28:34.593335 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a48f984adeb07181ec834ca7c575521f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"a48f984adeb07181ec834ca7c575521f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593523 kubelet[2744]: I0117 00:28:34.593361 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593523 kubelet[2744]: I0117 00:28:34.593388 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593823 kubelet[2744]: I0117 00:28:34.593414 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a48f984adeb07181ec834ca7c575521f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"a48f984adeb07181ec834ca7c575521f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593823 kubelet[2744]: I0117 00:28:34.593441 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:34.593823 kubelet[2744]: I0117 00:28:34.593471 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84950f8de6b79ce02d5de4fdd62bec08-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" (UID: \"84950f8de6b79ce02d5de4fdd62bec08\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9"
Jan 17 00:28:35.029740 sudo[2760]: pam_unix(sudo:session): session closed for user root
Jan 17 00:28:35.136834 kubelet[2744]: I0117 00:28:35.135227 2744 apiserver.go:52] "Watching apiserver"
Jan 17 00:28:35.183614 kubelet[2744]: I0117 00:28:35.183532 2744 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 17 00:28:35.451217 kubelet[2744]: I0117 00:28:35.450317 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" podStartSLOduration=1.450245139 podStartE2EDuration="1.450245139s" podCreationTimestamp="2026-01-17 00:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:35.449982681 +0000 UTC m=+1.442730386" watchObservedRunningTime="2026-01-17 00:28:35.450245139 +0000 UTC m=+1.442992859"
Jan 17 00:28:35.508422 kubelet[2744]: I0117 00:28:35.508340 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" podStartSLOduration=1.5083153280000001 podStartE2EDuration="1.508315328s" podCreationTimestamp="2026-01-17 00:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:35.477508261 +0000 UTC m=+1.470255958" watchObservedRunningTime="2026-01-17 00:28:35.508315328 +0000 UTC m=+1.501063034"
Jan 17 00:28:35.536330 kubelet[2744]: I0117 00:28:35.536255 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" podStartSLOduration=1.536227367 podStartE2EDuration="1.536227367s" podCreationTimestamp="2026-01-17 00:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:35.514934391 +0000 UTC m=+1.507682097" watchObservedRunningTime="2026-01-17 00:28:35.536227367 +0000 UTC m=+1.528975073"
Jan 17 00:28:37.687829 sudo[1854]: pam_unix(sudo:session): session closed for user root
Jan 17 00:28:37.721145 sshd[1850]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:37.728268 systemd[1]: sshd@6-10.128.0.62:22-4.153.228.146:36272.service: Deactivated successfully.
Jan 17 00:28:37.732385 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:28:37.732492 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:28:37.737445 systemd-logind[1574]: Removed session 7.
Jan 17 00:28:37.746319 update_engine[1576]: I20260117 00:28:37.746227 1576 update_attempter.cc:509] Updating boot flags...
Jan 17 00:28:37.819756 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2823)
Jan 17 00:28:37.936870 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2826)
Jan 17 00:28:38.177466 kubelet[2744]: I0117 00:28:38.177399 2744 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 00:28:38.178218 containerd[1590]: time="2026-01-17T00:28:38.177931908Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:28:38.178782 kubelet[2744]: I0117 00:28:38.178239 2744 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 00:28:39.026010 kubelet[2744]: I0117 00:28:39.025957 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0-lib-modules\") pod \"kube-proxy-cg9v6\" (UID: \"fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0\") " pod="kube-system/kube-proxy-cg9v6"
Jan 17 00:28:39.026415 kubelet[2744]: I0117 00:28:39.026021 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6q2\" (UniqueName: \"kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-kube-api-access-lh6q2\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.026415 kubelet[2744]: I0117 00:28:39.026052 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-run\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.026415 kubelet[2744]: I0117 00:28:39.026081 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-bpf-maps\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.026415 kubelet[2744]: I0117 00:28:39.026103 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-etc-cni-netd\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.026415 kubelet[2744]: I0117 00:28:39.026131 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-kernel\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.026415 kubelet[2744]: I0117 00:28:39.026162 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0-xtables-lock\") pod \"kube-proxy-cg9v6\" (UID: \"fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0\") " pod="kube-system/kube-proxy-cg9v6"
Jan 17 00:28:39.027545 kubelet[2744]: I0117 00:28:39.026187 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-xtables-lock\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.027545 kubelet[2744]: I0117 00:28:39.026214 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-hubble-tls\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.027545 kubelet[2744]: I0117 00:28:39.026247 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e94b669-209f-45fa-8242-66f711f5454d-cilium-config-path\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.027545 kubelet[2744]: I0117 00:28:39.026277 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0-kube-proxy\") pod \"kube-proxy-cg9v6\" (UID: \"fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0\") " pod="kube-system/kube-proxy-cg9v6"
Jan 17 00:28:39.027545 kubelet[2744]: I0117 00:28:39.026303 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cni-path\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.027545 kubelet[2744]: I0117 00:28:39.027522 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-lib-modules\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.027898 kubelet[2744]: I0117 00:28:39.027850 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-net\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.029743 kubelet[2744]: I0117 00:28:39.029679 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp6hg\" (UniqueName: \"kubernetes.io/projected/fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0-kube-api-access-gp6hg\") pod \"kube-proxy-cg9v6\" (UID: \"fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0\") " pod="kube-system/kube-proxy-cg9v6"
Jan 17 00:28:39.030623 kubelet[2744]: I0117 00:28:39.030567 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-cgroup\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.030766 kubelet[2744]: I0117 00:28:39.030738 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-hostproc\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.030955 kubelet[2744]: I0117 00:28:39.030912 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e94b669-209f-45fa-8242-66f711f5454d-clustermesh-secrets\") pod \"cilium-wjv9s\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") " pod="kube-system/cilium-wjv9s"
Jan 17 00:28:39.306189 containerd[1590]: time="2026-01-17T00:28:39.305461322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg9v6,Uid:fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:39.313613 containerd[1590]: time="2026-01-17T00:28:39.313540345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjv9s,Uid:1e94b669-209f-45fa-8242-66f711f5454d,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:39.337727 kubelet[2744]: I0117 00:28:39.333901 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24b9047a-1695-449e-8eeb-5f48dc4d82ce-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kfbkj\" (UID: \"24b9047a-1695-449e-8eeb-5f48dc4d82ce\") " pod="kube-system/cilium-operator-6c4d7847fc-kfbkj"
Jan 17 00:28:39.337727 kubelet[2744]: I0117 00:28:39.333965 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssz6\" (UniqueName: \"kubernetes.io/projected/24b9047a-1695-449e-8eeb-5f48dc4d82ce-kube-api-access-fssz6\") pod \"cilium-operator-6c4d7847fc-kfbkj\" (UID: \"24b9047a-1695-449e-8eeb-5f48dc4d82ce\") " pod="kube-system/cilium-operator-6c4d7847fc-kfbkj"
Jan 17 00:28:39.362748 containerd[1590]: time="2026-01-17T00:28:39.360275326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:39.362748 containerd[1590]: time="2026-01-17T00:28:39.360370170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:39.362748 containerd[1590]: time="2026-01-17T00:28:39.360401372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:39.362748 containerd[1590]: time="2026-01-17T00:28:39.360576337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:39.383009 containerd[1590]: time="2026-01-17T00:28:39.382895741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:39.383305 containerd[1590]: time="2026-01-17T00:28:39.382992727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:39.383305 containerd[1590]: time="2026-01-17T00:28:39.383276324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:39.383819 containerd[1590]: time="2026-01-17T00:28:39.383727823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:39.470530 containerd[1590]: time="2026-01-17T00:28:39.470469394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjv9s,Uid:1e94b669-209f-45fa-8242-66f711f5454d,Namespace:kube-system,Attempt:0,} returns sandbox id \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\""
Jan 17 00:28:39.475377 containerd[1590]: time="2026-01-17T00:28:39.474454094Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 17 00:28:39.481140 containerd[1590]: time="2026-01-17T00:28:39.481097171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg9v6,Uid:fa31d5bd-dbfa-4f69-bf0a-01ea8443dfb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ac285bfe6e7460cc63714d15eb34cad6bccbe78cbb47b451317786c4866a6e\""
Jan 17 00:28:39.485471 containerd[1590]: time="2026-01-17T00:28:39.485426654Z" level=info msg="CreateContainer within sandbox \"03ac285bfe6e7460cc63714d15eb34cad6bccbe78cbb47b451317786c4866a6e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:28:39.506691 containerd[1590]: time="2026-01-17T00:28:39.506621918Z" level=info msg="CreateContainer within sandbox \"03ac285bfe6e7460cc63714d15eb34cad6bccbe78cbb47b451317786c4866a6e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c8572df4925062f7a30fa1b52b89846f08da77c4c9994fa67521f1f7ca48d2ec\""
Jan 17 00:28:39.507796 containerd[1590]: time="2026-01-17T00:28:39.507740970Z" level=info msg="StartContainer for \"c8572df4925062f7a30fa1b52b89846f08da77c4c9994fa67521f1f7ca48d2ec\""
Jan 17 00:28:39.536982 containerd[1590]: time="2026-01-17T00:28:39.536922328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kfbkj,Uid:24b9047a-1695-449e-8eeb-5f48dc4d82ce,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:39.581366 containerd[1590]: time="2026-01-17T00:28:39.580986196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:39.581366 containerd[1590]: time="2026-01-17T00:28:39.581058315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:39.581366 containerd[1590]: time="2026-01-17T00:28:39.581077067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:39.582754 containerd[1590]: time="2026-01-17T00:28:39.582151612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:39.610241 containerd[1590]: time="2026-01-17T00:28:39.610100153Z" level=info msg="StartContainer for \"c8572df4925062f7a30fa1b52b89846f08da77c4c9994fa67521f1f7ca48d2ec\" returns successfully"
Jan 17 00:28:39.716813 containerd[1590]: time="2026-01-17T00:28:39.714572066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kfbkj,Uid:24b9047a-1695-449e-8eeb-5f48dc4d82ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\""
Jan 17 00:28:44.025678 kubelet[2744]: I0117 00:28:44.025595 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cg9v6" podStartSLOduration=6.025574568 podStartE2EDuration="6.025574568s" podCreationTimestamp="2026-01-17 00:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:40.376256815 +0000 UTC m=+6.369004518" watchObservedRunningTime="2026-01-17 00:28:44.025574568 +0000 UTC m=+10.018322272"
Jan 17 00:28:45.179054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873816951.mount: Deactivated 
successfully. Jan 17 00:28:48.855819 containerd[1590]: time="2026-01-17T00:28:48.855743299Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:48.857432 containerd[1590]: time="2026-01-17T00:28:48.857353794Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:28:48.860490 containerd[1590]: time="2026-01-17T00:28:48.858641435Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:48.861532 containerd[1590]: time="2026-01-17T00:28:48.861360413Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.386831602s" Jan 17 00:28:48.861532 containerd[1590]: time="2026-01-17T00:28:48.861410080Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:28:48.865055 containerd[1590]: time="2026-01-17T00:28:48.864668207Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:28:48.866690 containerd[1590]: time="2026-01-17T00:28:48.865564840Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:28:48.894882 containerd[1590]: time="2026-01-17T00:28:48.894818168Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\"" Jan 17 00:28:48.898274 containerd[1590]: time="2026-01-17T00:28:48.895563615Z" level=info msg="StartContainer for \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\"" Jan 17 00:28:48.974546 containerd[1590]: time="2026-01-17T00:28:48.974481164Z" level=info msg="StartContainer for \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\" returns successfully" Jan 17 00:28:49.885799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff-rootfs.mount: Deactivated successfully. Jan 17 00:28:50.826882 containerd[1590]: time="2026-01-17T00:28:50.826561296Z" level=info msg="shim disconnected" id=aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff namespace=k8s.io Jan 17 00:28:50.826882 containerd[1590]: time="2026-01-17T00:28:50.826634384Z" level=warning msg="cleaning up after shim disconnected" id=aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff namespace=k8s.io Jan 17 00:28:50.826882 containerd[1590]: time="2026-01-17T00:28:50.826650179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:28:51.235173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560990689.mount: Deactivated successfully. 
Jan 17 00:28:51.405148 containerd[1590]: time="2026-01-17T00:28:51.405083205Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:28:51.438729 containerd[1590]: time="2026-01-17T00:28:51.437805200Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\"" Jan 17 00:28:51.440436 containerd[1590]: time="2026-01-17T00:28:51.440394060Z" level=info msg="StartContainer for \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\"" Jan 17 00:28:51.550991 containerd[1590]: time="2026-01-17T00:28:51.550837050Z" level=info msg="StartContainer for \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\" returns successfully" Jan 17 00:28:51.579643 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:28:51.581737 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:28:51.581854 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:28:51.588916 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:28:51.641029 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:28:51.678408 containerd[1590]: time="2026-01-17T00:28:51.678322130Z" level=info msg="shim disconnected" id=df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904 namespace=k8s.io Jan 17 00:28:51.678408 containerd[1590]: time="2026-01-17T00:28:51.678403606Z" level=warning msg="cleaning up after shim disconnected" id=df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904 namespace=k8s.io Jan 17 00:28:51.678408 containerd[1590]: time="2026-01-17T00:28:51.678417300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:28:51.706795 containerd[1590]: time="2026-01-17T00:28:51.705826140Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:28:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:28:52.223984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904-rootfs.mount: Deactivated successfully. 
Jan 17 00:28:52.394822 containerd[1590]: time="2026-01-17T00:28:52.394758684Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:52.396434 containerd[1590]: time="2026-01-17T00:28:52.396125335Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:28:52.398745 containerd[1590]: time="2026-01-17T00:28:52.397972053Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:52.401531 containerd[1590]: time="2026-01-17T00:28:52.401133281Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.535357168s" Jan 17 00:28:52.401531 containerd[1590]: time="2026-01-17T00:28:52.401179703Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:28:52.406753 containerd[1590]: time="2026-01-17T00:28:52.405879967Z" level=info msg="CreateContainer within sandbox \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:28:52.410644 containerd[1590]: time="2026-01-17T00:28:52.410594901Z" level=info msg="CreateContainer within sandbox 
\"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:28:52.458221 containerd[1590]: time="2026-01-17T00:28:52.458169441Z" level=info msg="CreateContainer within sandbox \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\"" Jan 17 00:28:52.463401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272651898.mount: Deactivated successfully. Jan 17 00:28:52.465826 containerd[1590]: time="2026-01-17T00:28:52.465577108Z" level=info msg="StartContainer for \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\"" Jan 17 00:28:52.468734 containerd[1590]: time="2026-01-17T00:28:52.467258866Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\"" Jan 17 00:28:52.468734 containerd[1590]: time="2026-01-17T00:28:52.468045358Z" level=info msg="StartContainer for \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\"" Jan 17 00:28:52.599436 containerd[1590]: time="2026-01-17T00:28:52.598895118Z" level=info msg="StartContainer for \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\" returns successfully" Jan 17 00:28:52.621592 containerd[1590]: time="2026-01-17T00:28:52.621541732Z" level=info msg="StartContainer for \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\" returns successfully" Jan 17 00:28:52.814894 containerd[1590]: time="2026-01-17T00:28:52.814805606Z" level=info msg="shim disconnected" id=fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49 namespace=k8s.io Jan 17 00:28:52.814894 containerd[1590]: time="2026-01-17T00:28:52.814894151Z" 
level=warning msg="cleaning up after shim disconnected" id=fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49 namespace=k8s.io Jan 17 00:28:52.814894 containerd[1590]: time="2026-01-17T00:28:52.814908588Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:28:53.432112 containerd[1590]: time="2026-01-17T00:28:53.431935404Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:28:53.465738 containerd[1590]: time="2026-01-17T00:28:53.464725089Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\"" Jan 17 00:28:53.470737 containerd[1590]: time="2026-01-17T00:28:53.468112797Z" level=info msg="StartContainer for \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\"" Jan 17 00:28:53.719608 kubelet[2744]: I0117 00:28:53.719444 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kfbkj" podStartSLOduration=2.033952594 podStartE2EDuration="14.719419375s" podCreationTimestamp="2026-01-17 00:28:39 +0000 UTC" firstStartedPulling="2026-01-17 00:28:39.717025165 +0000 UTC m=+5.709772897" lastFinishedPulling="2026-01-17 00:28:52.402491981 +0000 UTC m=+18.395239678" observedRunningTime="2026-01-17 00:28:53.484165277 +0000 UTC m=+19.476912981" watchObservedRunningTime="2026-01-17 00:28:53.719419375 +0000 UTC m=+19.712167083" Jan 17 00:28:53.750891 containerd[1590]: time="2026-01-17T00:28:53.748740890Z" level=info msg="StartContainer for \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\" returns successfully" Jan 17 00:28:53.804902 containerd[1590]: time="2026-01-17T00:28:53.804773010Z" level=info msg="shim 
disconnected" id=73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2 namespace=k8s.io Jan 17 00:28:53.804902 containerd[1590]: time="2026-01-17T00:28:53.804872909Z" level=warning msg="cleaning up after shim disconnected" id=73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2 namespace=k8s.io Jan 17 00:28:53.804902 containerd[1590]: time="2026-01-17T00:28:53.805012717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:28:54.221891 systemd[1]: run-containerd-runc-k8s.io-73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2-runc.DIp4T3.mount: Deactivated successfully. Jan 17 00:28:54.222161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2-rootfs.mount: Deactivated successfully. Jan 17 00:28:54.437030 containerd[1590]: time="2026-01-17T00:28:54.436954956Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:28:54.463889 containerd[1590]: time="2026-01-17T00:28:54.463690705Z" level=info msg="CreateContainer within sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\"" Jan 17 00:28:54.466493 containerd[1590]: time="2026-01-17T00:28:54.466455176Z" level=info msg="StartContainer for \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\"" Jan 17 00:28:54.572789 containerd[1590]: time="2026-01-17T00:28:54.572667337Z" level=info msg="StartContainer for \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\" returns successfully" Jan 17 00:28:54.760376 kubelet[2744]: I0117 00:28:54.760334 2744 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:28:54.854125 kubelet[2744]: I0117 
00:28:54.853927 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a887f6-094c-4a17-85fc-c047a3553b59-config-volume\") pod \"coredns-668d6bf9bc-qgh2m\" (UID: \"69a887f6-094c-4a17-85fc-c047a3553b59\") " pod="kube-system/coredns-668d6bf9bc-qgh2m" Jan 17 00:28:54.854125 kubelet[2744]: I0117 00:28:54.854038 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5f94\" (UniqueName: \"kubernetes.io/projected/53c44c08-cf1e-41a9-899d-57efb27060e8-kube-api-access-t5f94\") pod \"coredns-668d6bf9bc-xtjcb\" (UID: \"53c44c08-cf1e-41a9-899d-57efb27060e8\") " pod="kube-system/coredns-668d6bf9bc-xtjcb" Jan 17 00:28:54.854125 kubelet[2744]: I0117 00:28:54.854072 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lplgq\" (UniqueName: \"kubernetes.io/projected/69a887f6-094c-4a17-85fc-c047a3553b59-kube-api-access-lplgq\") pod \"coredns-668d6bf9bc-qgh2m\" (UID: \"69a887f6-094c-4a17-85fc-c047a3553b59\") " pod="kube-system/coredns-668d6bf9bc-qgh2m" Jan 17 00:28:54.854393 kubelet[2744]: I0117 00:28:54.854138 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53c44c08-cf1e-41a9-899d-57efb27060e8-config-volume\") pod \"coredns-668d6bf9bc-xtjcb\" (UID: \"53c44c08-cf1e-41a9-899d-57efb27060e8\") " pod="kube-system/coredns-668d6bf9bc-xtjcb" Jan 17 00:28:55.144177 containerd[1590]: time="2026-01-17T00:28:55.143877357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtjcb,Uid:53c44c08-cf1e-41a9-899d-57efb27060e8,Namespace:kube-system,Attempt:0,}" Jan 17 00:28:55.150124 containerd[1590]: time="2026-01-17T00:28:55.149987163Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-qgh2m,Uid:69a887f6-094c-4a17-85fc-c047a3553b59,Namespace:kube-system,Attempt:0,}" Jan 17 00:28:55.255565 systemd[1]: run-containerd-runc-k8s.io-b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c-runc.iIDz8S.mount: Deactivated successfully. Jan 17 00:28:55.484427 kubelet[2744]: I0117 00:28:55.482366 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wjv9s" podStartSLOduration=8.092188558 podStartE2EDuration="17.482341665s" podCreationTimestamp="2026-01-17 00:28:38 +0000 UTC" firstStartedPulling="2026-01-17 00:28:39.47322731 +0000 UTC m=+5.465975009" lastFinishedPulling="2026-01-17 00:28:48.86338042 +0000 UTC m=+14.856128116" observedRunningTime="2026-01-17 00:28:55.478294653 +0000 UTC m=+21.471042358" watchObservedRunningTime="2026-01-17 00:28:55.482341665 +0000 UTC m=+21.475089372" Jan 17 00:28:56.725345 systemd-networkd[1221]: cilium_host: Link UP Jan 17 00:28:56.725624 systemd-networkd[1221]: cilium_net: Link UP Jan 17 00:28:56.726003 systemd-networkd[1221]: cilium_net: Gained carrier Jan 17 00:28:56.726265 systemd-networkd[1221]: cilium_host: Gained carrier Jan 17 00:28:56.731836 systemd-networkd[1221]: cilium_net: Gained IPv6LL Jan 17 00:28:56.880801 systemd-networkd[1221]: cilium_vxlan: Link UP Jan 17 00:28:56.882477 systemd-networkd[1221]: cilium_vxlan: Gained carrier Jan 17 00:28:57.162749 kernel: NET: Registered PF_ALG protocol family Jan 17 00:28:57.427464 systemd-networkd[1221]: cilium_host: Gained IPv6LL Jan 17 00:28:58.053823 systemd-networkd[1221]: lxc_health: Link UP Jan 17 00:28:58.064049 systemd-networkd[1221]: lxc_health: Gained carrier Jan 17 00:28:58.294725 systemd-networkd[1221]: lxc89e2542e423c: Link UP Jan 17 00:28:58.300920 kernel: eth0: renamed from tmp3615d Jan 17 00:28:58.317649 systemd-networkd[1221]: lxcb7c69a3d3229: Link UP Jan 17 00:28:58.329636 systemd-networkd[1221]: lxc89e2542e423c: Gained carrier Jan 17 00:28:58.329861 kernel: 
eth0: renamed from tmp1443d Jan 17 00:28:58.346622 systemd-networkd[1221]: lxcb7c69a3d3229: Gained carrier Jan 17 00:28:58.451331 systemd-networkd[1221]: cilium_vxlan: Gained IPv6LL Jan 17 00:28:59.282896 systemd-networkd[1221]: lxc_health: Gained IPv6LL Jan 17 00:28:59.538886 systemd-networkd[1221]: lxc89e2542e423c: Gained IPv6LL Jan 17 00:29:00.115141 systemd-networkd[1221]: lxcb7c69a3d3229: Gained IPv6LL Jan 17 00:29:02.279650 ntpd[1537]: Listen normally on 6 cilium_host 192.168.0.154:123 Jan 17 00:29:02.280878 ntpd[1537]: 17 Jan 00:29:02 ntpd[1537]: Listen normally on 6 cilium_host 192.168.0.154:123 Jan 17 00:29:02.280878 ntpd[1537]: 17 Jan 00:29:02 ntpd[1537]: Listen normally on 7 cilium_net [fe80::bc0b:2bff:fe4c:57b9%4]:123 Jan 17 00:29:02.280878 ntpd[1537]: 17 Jan 00:29:02 ntpd[1537]: Listen normally on 8 cilium_host [fe80::458:7ff:fe59:ab65%5]:123 Jan 17 00:29:02.280878 ntpd[1537]: 17 Jan 00:29:02 ntpd[1537]: Listen normally on 9 cilium_vxlan [fe80::6035:78ff:fefa:a119%6]:123 Jan 17 00:29:02.280878 ntpd[1537]: 17 Jan 00:29:02 ntpd[1537]: Listen normally on 10 lxc_health [fe80::30b3:2ff:fe63:af7%8]:123 Jan 17 00:29:02.280878 ntpd[1537]: 17 Jan 00:29:02 ntpd[1537]: Listen normally on 11 lxc89e2542e423c [fe80::5c74:f1ff:fe22:1e50%10]:123 Jan 17 00:29:02.280878 ntpd[1537]: 17 Jan 00:29:02 ntpd[1537]: Listen normally on 12 lxcb7c69a3d3229 [fe80::9c31:56ff:fe65:e296%12]:123 Jan 17 00:29:02.279822 ntpd[1537]: Listen normally on 7 cilium_net [fe80::bc0b:2bff:fe4c:57b9%4]:123 Jan 17 00:29:02.279909 ntpd[1537]: Listen normally on 8 cilium_host [fe80::458:7ff:fe59:ab65%5]:123 Jan 17 00:29:02.279970 ntpd[1537]: Listen normally on 9 cilium_vxlan [fe80::6035:78ff:fefa:a119%6]:123 Jan 17 00:29:02.280026 ntpd[1537]: Listen normally on 10 lxc_health [fe80::30b3:2ff:fe63:af7%8]:123 Jan 17 00:29:02.280097 ntpd[1537]: Listen normally on 11 lxc89e2542e423c [fe80::5c74:f1ff:fe22:1e50%10]:123 Jan 17 00:29:02.280152 ntpd[1537]: Listen normally on 12 lxcb7c69a3d3229 
[fe80::9c31:56ff:fe65:e296%12]:123 Jan 17 00:29:03.565058 containerd[1590]: time="2026-01-17T00:29:03.564585196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:03.565058 containerd[1590]: time="2026-01-17T00:29:03.564663725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:03.565058 containerd[1590]: time="2026-01-17T00:29:03.564691901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:03.565058 containerd[1590]: time="2026-01-17T00:29:03.564890831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:03.593238 containerd[1590]: time="2026-01-17T00:29:03.591533211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:03.593238 containerd[1590]: time="2026-01-17T00:29:03.591613117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:03.593238 containerd[1590]: time="2026-01-17T00:29:03.591642228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:03.593238 containerd[1590]: time="2026-01-17T00:29:03.592147757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:03.672240 systemd[1]: run-containerd-runc-k8s.io-1443d59fa7a20085d8ba521befc7f059f02b600547dfe2e8e9cce251c7a99fbc-runc.Ylg4zP.mount: Deactivated successfully. 
Jan 17 00:29:03.768390 containerd[1590]: time="2026-01-17T00:29:03.768340627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtjcb,Uid:53c44c08-cf1e-41a9-899d-57efb27060e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3615dd901bd1c6c09414dc1047a0ab1171b743e9ab91b999f32cafc6098470b7\"" Jan 17 00:29:03.780462 containerd[1590]: time="2026-01-17T00:29:03.780174487Z" level=info msg="CreateContainer within sandbox \"3615dd901bd1c6c09414dc1047a0ab1171b743e9ab91b999f32cafc6098470b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:29:03.827493 containerd[1590]: time="2026-01-17T00:29:03.827061761Z" level=info msg="CreateContainer within sandbox \"3615dd901bd1c6c09414dc1047a0ab1171b743e9ab91b999f32cafc6098470b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12821e8fee0b56e393a20fe0a66fe1609110788aeb3e3a3b92172c7a1acba0b5\"" Jan 17 00:29:03.833432 containerd[1590]: time="2026-01-17T00:29:03.831176916Z" level=info msg="StartContainer for \"12821e8fee0b56e393a20fe0a66fe1609110788aeb3e3a3b92172c7a1acba0b5\"" Jan 17 00:29:03.842542 containerd[1590]: time="2026-01-17T00:29:03.842425445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qgh2m,Uid:69a887f6-094c-4a17-85fc-c047a3553b59,Namespace:kube-system,Attempt:0,} returns sandbox id \"1443d59fa7a20085d8ba521befc7f059f02b600547dfe2e8e9cce251c7a99fbc\"" Jan 17 00:29:03.849474 containerd[1590]: time="2026-01-17T00:29:03.849402649Z" level=info msg="CreateContainer within sandbox \"1443d59fa7a20085d8ba521befc7f059f02b600547dfe2e8e9cce251c7a99fbc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:29:03.868838 containerd[1590]: time="2026-01-17T00:29:03.868485535Z" level=info msg="CreateContainer within sandbox \"1443d59fa7a20085d8ba521befc7f059f02b600547dfe2e8e9cce251c7a99fbc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"de9b1093d21ff414bbd871cf23c2457760799f0fee47bad8699ab30aaa082bb0\"" Jan 17 00:29:03.869648 containerd[1590]: time="2026-01-17T00:29:03.869470110Z" level=info msg="StartContainer for \"de9b1093d21ff414bbd871cf23c2457760799f0fee47bad8699ab30aaa082bb0\"" Jan 17 00:29:03.935148 containerd[1590]: time="2026-01-17T00:29:03.935088375Z" level=info msg="StartContainer for \"12821e8fee0b56e393a20fe0a66fe1609110788aeb3e3a3b92172c7a1acba0b5\" returns successfully" Jan 17 00:29:03.977061 containerd[1590]: time="2026-01-17T00:29:03.976989771Z" level=info msg="StartContainer for \"de9b1093d21ff414bbd871cf23c2457760799f0fee47bad8699ab30aaa082bb0\" returns successfully" Jan 17 00:29:04.518534 kubelet[2744]: I0117 00:29:04.518439 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qgh2m" podStartSLOduration=25.518410595 podStartE2EDuration="25.518410595s" podCreationTimestamp="2026-01-17 00:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:04.500386285 +0000 UTC m=+30.493133993" watchObservedRunningTime="2026-01-17 00:29:04.518410595 +0000 UTC m=+30.511158305" Jan 17 00:29:04.544108 kubelet[2744]: I0117 00:29:04.544013 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xtjcb" podStartSLOduration=25.543987224 podStartE2EDuration="25.543987224s" podCreationTimestamp="2026-01-17 00:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:04.521632269 +0000 UTC m=+30.514379975" watchObservedRunningTime="2026-01-17 00:29:04.543987224 +0000 UTC m=+30.536734935" Jan 17 00:29:29.064122 systemd[1]: Started sshd@7-10.128.0.62:22-4.153.228.146:48906.service - OpenSSH per-connection server daemon (4.153.228.146:48906). 
Jan 17 00:29:29.284734 sshd[4132]: Accepted publickey for core from 4.153.228.146 port 48906 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:29:29.286750 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:29.293913 systemd-logind[1574]: New session 8 of user core. Jan 17 00:29:29.302082 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:29:29.574248 sshd[4132]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:29.579123 systemd[1]: sshd@7-10.128.0.62:22-4.153.228.146:48906.service: Deactivated successfully. Jan 17 00:29:29.590983 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:29:29.591925 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:29:29.597617 systemd-logind[1574]: Removed session 8. Jan 17 00:29:34.618164 systemd[1]: Started sshd@8-10.128.0.62:22-4.153.228.146:53966.service - OpenSSH per-connection server daemon (4.153.228.146:53966). Jan 17 00:29:34.864622 sshd[4149]: Accepted publickey for core from 4.153.228.146 port 53966 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:29:34.866562 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:34.872656 systemd-logind[1574]: New session 9 of user core. Jan 17 00:29:34.884305 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:29:35.125401 sshd[4149]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:35.132014 systemd[1]: sshd@8-10.128.0.62:22-4.153.228.146:53966.service: Deactivated successfully. Jan 17 00:29:35.138133 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:29:35.138957 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:29:35.141203 systemd-logind[1574]: Removed session 9. 
Jan 17 00:29:40.163117 systemd[1]: Started sshd@9-10.128.0.62:22-4.153.228.146:53980.service - OpenSSH per-connection server daemon (4.153.228.146:53980). Jan 17 00:29:40.380103 sshd[4166]: Accepted publickey for core from 4.153.228.146 port 53980 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:29:40.382002 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:40.389143 systemd-logind[1574]: New session 10 of user core. Jan 17 00:29:40.394055 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:29:40.631613 sshd[4166]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:40.640331 systemd[1]: sshd@9-10.128.0.62:22-4.153.228.146:53980.service: Deactivated successfully. Jan 17 00:29:40.647032 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:29:40.648397 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:29:40.650749 systemd-logind[1574]: Removed session 10. Jan 17 00:29:45.675227 systemd[1]: Started sshd@10-10.128.0.62:22-4.153.228.146:38500.service - OpenSSH per-connection server daemon (4.153.228.146:38500). Jan 17 00:29:45.937571 sshd[4181]: Accepted publickey for core from 4.153.228.146 port 38500 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:29:45.940345 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:45.947599 systemd-logind[1574]: New session 11 of user core. Jan 17 00:29:45.955198 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:29:46.198912 sshd[4181]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:46.204968 systemd[1]: sshd@10-10.128.0.62:22-4.153.228.146:38500.service: Deactivated successfully. Jan 17 00:29:46.211107 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:29:46.211327 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. 
Jan 17 00:29:46.213977 systemd-logind[1574]: Removed session 11.
Jan 17 00:29:51.236549 systemd[1]: Started sshd@11-10.128.0.62:22-4.153.228.146:38510.service - OpenSSH per-connection server daemon (4.153.228.146:38510).
Jan 17 00:29:51.465672 sshd[4196]: Accepted publickey for core from 4.153.228.146 port 38510 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:29:51.467613 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:29:51.474370 systemd-logind[1574]: New session 12 of user core.
Jan 17 00:29:51.479099 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 00:29:51.707668 sshd[4196]: pam_unix(sshd:session): session closed for user core
Jan 17 00:29:51.714977 systemd[1]: sshd@11-10.128.0.62:22-4.153.228.146:38510.service: Deactivated successfully.
Jan 17 00:29:51.720085 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 00:29:51.721383 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit.
Jan 17 00:29:51.722999 systemd-logind[1574]: Removed session 12.
Jan 17 00:29:51.754468 systemd[1]: Started sshd@12-10.128.0.62:22-4.153.228.146:38518.service - OpenSSH per-connection server daemon (4.153.228.146:38518).
Jan 17 00:29:52.001648 sshd[4210]: Accepted publickey for core from 4.153.228.146 port 38518 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:29:52.002551 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:29:52.008438 systemd-logind[1574]: New session 13 of user core.
Jan 17 00:29:52.014635 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 00:29:52.329520 sshd[4210]: pam_unix(sshd:session): session closed for user core
Jan 17 00:29:52.340996 systemd[1]: sshd@12-10.128.0.62:22-4.153.228.146:38518.service: Deactivated successfully.
Jan 17 00:29:52.351513 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 00:29:52.353374 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit.
Jan 17 00:29:52.355341 systemd-logind[1574]: Removed session 13.
Jan 17 00:29:52.365449 systemd[1]: Started sshd@13-10.128.0.62:22-4.153.228.146:38528.service - OpenSSH per-connection server daemon (4.153.228.146:38528).
Jan 17 00:29:52.589173 sshd[4223]: Accepted publickey for core from 4.153.228.146 port 38528 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:29:52.590886 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:29:52.599135 systemd-logind[1574]: New session 14 of user core.
Jan 17 00:29:52.605191 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 00:29:52.843479 sshd[4223]: pam_unix(sshd:session): session closed for user core
Jan 17 00:29:52.849606 systemd[1]: sshd@13-10.128.0.62:22-4.153.228.146:38528.service: Deactivated successfully.
Jan 17 00:29:52.855004 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit.
Jan 17 00:29:52.855877 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 00:29:52.858296 systemd-logind[1574]: Removed session 14.
Jan 17 00:29:57.883412 systemd[1]: Started sshd@14-10.128.0.62:22-4.153.228.146:47066.service - OpenSSH per-connection server daemon (4.153.228.146:47066).
Jan 17 00:29:58.113767 sshd[4237]: Accepted publickey for core from 4.153.228.146 port 47066 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:29:58.115663 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:29:58.121474 systemd-logind[1574]: New session 15 of user core.
Jan 17 00:29:58.127176 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:29:58.356568 sshd[4237]: pam_unix(sshd:session): session closed for user core
Jan 17 00:29:58.362241 systemd[1]: sshd@14-10.128.0.62:22-4.153.228.146:47066.service: Deactivated successfully.
Jan 17 00:29:58.370960 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 00:29:58.373094 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit.
Jan 17 00:29:58.376159 systemd-logind[1574]: Removed session 15.
Jan 17 00:30:03.395580 systemd[1]: Started sshd@15-10.128.0.62:22-4.153.228.146:47068.service - OpenSSH per-connection server daemon (4.153.228.146:47068).
Jan 17 00:30:03.618740 sshd[4251]: Accepted publickey for core from 4.153.228.146 port 47068 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:03.621301 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:03.628389 systemd-logind[1574]: New session 16 of user core.
Jan 17 00:30:03.636087 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 00:30:03.866436 sshd[4251]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:03.871215 systemd[1]: sshd@15-10.128.0.62:22-4.153.228.146:47068.service: Deactivated successfully.
Jan 17 00:30:03.878831 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:30:03.882080 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:30:03.884819 systemd-logind[1574]: Removed session 16.
Jan 17 00:30:03.902493 systemd[1]: Started sshd@16-10.128.0.62:22-4.153.228.146:47084.service - OpenSSH per-connection server daemon (4.153.228.146:47084).
Jan 17 00:30:04.124341 sshd[4265]: Accepted publickey for core from 4.153.228.146 port 47084 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:04.126581 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:04.132688 systemd-logind[1574]: New session 17 of user core.
Jan 17 00:30:04.139099 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:30:04.430883 sshd[4265]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:04.436110 systemd[1]: sshd@16-10.128.0.62:22-4.153.228.146:47084.service: Deactivated successfully.
Jan 17 00:30:04.443643 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:30:04.446444 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:30:04.448368 systemd-logind[1574]: Removed session 17.
Jan 17 00:30:04.469513 systemd[1]: Started sshd@17-10.128.0.62:22-4.153.228.146:49842.service - OpenSSH per-connection server daemon (4.153.228.146:49842).
Jan 17 00:30:04.686893 sshd[4277]: Accepted publickey for core from 4.153.228.146 port 49842 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:04.689137 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:04.695747 systemd-logind[1574]: New session 18 of user core.
Jan 17 00:30:04.706088 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:30:05.511450 sshd[4277]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:05.523171 systemd[1]: sshd@17-10.128.0.62:22-4.153.228.146:49842.service: Deactivated successfully.
Jan 17 00:30:05.531106 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:30:05.532834 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:30:05.535028 systemd-logind[1574]: Removed session 18.
Jan 17 00:30:05.551089 systemd[1]: Started sshd@18-10.128.0.62:22-4.153.228.146:49858.service - OpenSSH per-connection server daemon (4.153.228.146:49858).
Jan 17 00:30:05.780971 sshd[4296]: Accepted publickey for core from 4.153.228.146 port 49858 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:05.782253 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:05.788783 systemd-logind[1574]: New session 19 of user core.
Jan 17 00:30:05.796072 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:30:06.164897 sshd[4296]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:06.170433 systemd[1]: sshd@18-10.128.0.62:22-4.153.228.146:49858.service: Deactivated successfully.
Jan 17 00:30:06.179033 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:30:06.180099 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:30:06.183071 systemd-logind[1574]: Removed session 19.
Jan 17 00:30:06.201118 systemd[1]: Started sshd@19-10.128.0.62:22-4.153.228.146:49864.service - OpenSSH per-connection server daemon (4.153.228.146:49864).
Jan 17 00:30:06.420181 sshd[4307]: Accepted publickey for core from 4.153.228.146 port 49864 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:06.422254 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:06.428680 systemd-logind[1574]: New session 20 of user core.
Jan 17 00:30:06.440224 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:30:06.659624 sshd[4307]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:06.664516 systemd[1]: sshd@19-10.128.0.62:22-4.153.228.146:49864.service: Deactivated successfully.
Jan 17 00:30:06.671241 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:30:06.672290 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:30:06.674697 systemd-logind[1574]: Removed session 20.
Jan 17 00:30:11.696192 systemd[1]: Started sshd@20-10.128.0.62:22-4.153.228.146:49866.service - OpenSSH per-connection server daemon (4.153.228.146:49866).
Jan 17 00:30:11.915873 sshd[4323]: Accepted publickey for core from 4.153.228.146 port 49866 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:11.917790 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:11.924399 systemd-logind[1574]: New session 21 of user core.
Jan 17 00:30:11.930131 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:30:12.155997 sshd[4323]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:12.162826 systemd[1]: sshd@20-10.128.0.62:22-4.153.228.146:49866.service: Deactivated successfully.
Jan 17 00:30:12.167899 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:30:12.168105 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:30:12.170281 systemd-logind[1574]: Removed session 21.
Jan 17 00:30:17.193570 systemd[1]: Started sshd@21-10.128.0.62:22-4.153.228.146:46768.service - OpenSSH per-connection server daemon (4.153.228.146:46768).
Jan 17 00:30:17.414884 sshd[4340]: Accepted publickey for core from 4.153.228.146 port 46768 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:17.417005 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:17.423309 systemd-logind[1574]: New session 22 of user core.
Jan 17 00:30:17.432224 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:30:17.655393 sshd[4340]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:17.661541 systemd[1]: sshd@21-10.128.0.62:22-4.153.228.146:46768.service: Deactivated successfully.
Jan 17 00:30:17.667083 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:30:17.668549 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:30:17.670287 systemd-logind[1574]: Removed session 22.
Jan 17 00:30:22.694143 systemd[1]: Started sshd@22-10.128.0.62:22-4.153.228.146:46780.service - OpenSSH per-connection server daemon (4.153.228.146:46780).
Jan 17 00:30:22.911458 sshd[4353]: Accepted publickey for core from 4.153.228.146 port 46780 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:22.914046 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:22.920564 systemd-logind[1574]: New session 23 of user core.
Jan 17 00:30:22.928099 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:30:23.156614 sshd[4353]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:23.161691 systemd[1]: sshd@22-10.128.0.62:22-4.153.228.146:46780.service: Deactivated successfully.
Jan 17 00:30:23.169312 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:30:23.170956 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:30:23.172584 systemd-logind[1574]: Removed session 23.
Jan 17 00:30:28.194759 systemd[1]: Started sshd@23-10.128.0.62:22-4.153.228.146:44438.service - OpenSSH per-connection server daemon (4.153.228.146:44438).
Jan 17 00:30:28.419424 sshd[4367]: Accepted publickey for core from 4.153.228.146 port 44438 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:28.421371 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:28.427659 systemd-logind[1574]: New session 24 of user core.
Jan 17 00:30:28.438158 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:30:28.656828 sshd[4367]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:28.663493 systemd[1]: sshd@23-10.128.0.62:22-4.153.228.146:44438.service: Deactivated successfully.
Jan 17 00:30:28.668671 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:30:28.669148 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:30:28.672330 systemd-logind[1574]: Removed session 24.
Jan 17 00:30:28.695167 systemd[1]: Started sshd@24-10.128.0.62:22-4.153.228.146:44448.service - OpenSSH per-connection server daemon (4.153.228.146:44448).
Jan 17 00:30:28.911420 sshd[4381]: Accepted publickey for core from 4.153.228.146 port 44448 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:28.913406 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:28.919390 systemd-logind[1574]: New session 25 of user core.
Jan 17 00:30:28.926097 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:30:30.562994 containerd[1590]: time="2026-01-17T00:30:30.562664398Z" level=info msg="StopContainer for \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\" with timeout 30 (s)"
Jan 17 00:30:30.566343 containerd[1590]: time="2026-01-17T00:30:30.566290364Z" level=info msg="Stop container \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\" with signal terminated"
Jan 17 00:30:30.620562 containerd[1590]: time="2026-01-17T00:30:30.620385151Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:30:30.633924 containerd[1590]: time="2026-01-17T00:30:30.633867783Z" level=info msg="StopContainer for \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\" with timeout 2 (s)"
Jan 17 00:30:30.634432 containerd[1590]: time="2026-01-17T00:30:30.634400431Z" level=info msg="Stop container \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\" with signal terminated"
Jan 17 00:30:30.650421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583-rootfs.mount: Deactivated successfully.
Jan 17 00:30:30.657489 systemd-networkd[1221]: lxc_health: Link DOWN
Jan 17 00:30:30.658263 systemd-networkd[1221]: lxc_health: Lost carrier
Jan 17 00:30:30.688234 containerd[1590]: time="2026-01-17T00:30:30.687847673Z" level=info msg="shim disconnected" id=9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583 namespace=k8s.io
Jan 17 00:30:30.688234 containerd[1590]: time="2026-01-17T00:30:30.687926955Z" level=warning msg="cleaning up after shim disconnected" id=9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583 namespace=k8s.io
Jan 17 00:30:30.688234 containerd[1590]: time="2026-01-17T00:30:30.687942010Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:30.726080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c-rootfs.mount: Deactivated successfully.
Jan 17 00:30:30.731640 containerd[1590]: time="2026-01-17T00:30:30.731539343Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:30:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:30:30.737173 containerd[1590]: time="2026-01-17T00:30:30.737114785Z" level=info msg="StopContainer for \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\" returns successfully"
Jan 17 00:30:30.738223 containerd[1590]: time="2026-01-17T00:30:30.738183731Z" level=info msg="StopPodSandbox for \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\""
Jan 17 00:30:30.738363 containerd[1590]: time="2026-01-17T00:30:30.738240664Z" level=info msg="Container to stop \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:30:30.740931 containerd[1590]: time="2026-01-17T00:30:30.740861468Z" level=info msg="shim disconnected" id=b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c namespace=k8s.io
Jan 17 00:30:30.741049 containerd[1590]: time="2026-01-17T00:30:30.740934187Z" level=warning msg="cleaning up after shim disconnected" id=b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c namespace=k8s.io
Jan 17 00:30:30.741049 containerd[1590]: time="2026-01-17T00:30:30.740951482Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:30.742940 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49-shm.mount: Deactivated successfully.
Jan 17 00:30:30.776532 containerd[1590]: time="2026-01-17T00:30:30.776270296Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:30:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:30:30.782244 containerd[1590]: time="2026-01-17T00:30:30.782083264Z" level=info msg="StopContainer for \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\" returns successfully"
Jan 17 00:30:30.782771 containerd[1590]: time="2026-01-17T00:30:30.782732148Z" level=info msg="StopPodSandbox for \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\""
Jan 17 00:30:30.782921 containerd[1590]: time="2026-01-17T00:30:30.782795419Z" level=info msg="Container to stop \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:30:30.782921 containerd[1590]: time="2026-01-17T00:30:30.782816845Z" level=info msg="Container to stop \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:30:30.782921 containerd[1590]: time="2026-01-17T00:30:30.782833899Z" level=info msg="Container to stop \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:30:30.782921 containerd[1590]: time="2026-01-17T00:30:30.782853167Z" level=info msg="Container to stop \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:30:30.782921 containerd[1590]: time="2026-01-17T00:30:30.782869984Z" level=info msg="Container to stop \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:30:30.790441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd-shm.mount: Deactivated successfully.
Jan 17 00:30:30.813215 containerd[1590]: time="2026-01-17T00:30:30.812945170Z" level=info msg="shim disconnected" id=e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49 namespace=k8s.io
Jan 17 00:30:30.813215 containerd[1590]: time="2026-01-17T00:30:30.813019563Z" level=warning msg="cleaning up after shim disconnected" id=e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49 namespace=k8s.io
Jan 17 00:30:30.813215 containerd[1590]: time="2026-01-17T00:30:30.813033541Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:30.859173 containerd[1590]: time="2026-01-17T00:30:30.857857895Z" level=info msg="TearDown network for sandbox \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" successfully"
Jan 17 00:30:30.859173 containerd[1590]: time="2026-01-17T00:30:30.857902592Z" level=info msg="StopPodSandbox for \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" returns successfully"
Jan 17 00:30:30.862515 containerd[1590]: time="2026-01-17T00:30:30.861984149Z" level=info msg="shim disconnected" id=386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd namespace=k8s.io
Jan 17 00:30:30.862515 containerd[1590]: time="2026-01-17T00:30:30.862064870Z" level=warning msg="cleaning up after shim disconnected" id=386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd namespace=k8s.io
Jan 17 00:30:30.862515 containerd[1590]: time="2026-01-17T00:30:30.862082291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:30.891115 containerd[1590]: time="2026-01-17T00:30:30.891005010Z" level=info msg="TearDown network for sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" successfully"
Jan 17 00:30:30.891115 containerd[1590]: time="2026-01-17T00:30:30.891051318Z" level=info msg="StopPodSandbox for \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" returns successfully"
Jan 17 00:30:31.016498 kubelet[2744]: I0117 00:30:31.016435 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-hostproc\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.018730 kubelet[2744]: I0117 00:30:31.016545 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.018730 kubelet[2744]: I0117 00:30:31.017338 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-bpf-maps\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.018730 kubelet[2744]: I0117 00:30:31.017390 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh6q2\" (UniqueName: \"kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-kube-api-access-lh6q2\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.018730 kubelet[2744]: I0117 00:30:31.017422 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.018730 kubelet[2744]: I0117 00:30:31.017426 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e94b669-209f-45fa-8242-66f711f5454d-clustermesh-secrets\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019117 kubelet[2744]: I0117 00:30:31.017485 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fssz6\" (UniqueName: \"kubernetes.io/projected/24b9047a-1695-449e-8eeb-5f48dc4d82ce-kube-api-access-fssz6\") pod \"24b9047a-1695-449e-8eeb-5f48dc4d82ce\" (UID: \"24b9047a-1695-449e-8eeb-5f48dc4d82ce\") "
Jan 17 00:30:31.019117 kubelet[2744]: I0117 00:30:31.017533 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-cgroup\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019117 kubelet[2744]: I0117 00:30:31.017564 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-xtables-lock\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019117 kubelet[2744]: I0117 00:30:31.017596 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e94b669-209f-45fa-8242-66f711f5454d-cilium-config-path\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019117 kubelet[2744]: I0117 00:30:31.017622 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cni-path\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019117 kubelet[2744]: I0117 00:30:31.017656 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24b9047a-1695-449e-8eeb-5f48dc4d82ce-cilium-config-path\") pod \"24b9047a-1695-449e-8eeb-5f48dc4d82ce\" (UID: \"24b9047a-1695-449e-8eeb-5f48dc4d82ce\") "
Jan 17 00:30:31.019447 kubelet[2744]: I0117 00:30:31.017685 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-hubble-tls\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019447 kubelet[2744]: I0117 00:30:31.017733 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-run\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019447 kubelet[2744]: I0117 00:30:31.017759 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-etc-cni-netd\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019447 kubelet[2744]: I0117 00:30:31.017784 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-lib-modules\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019447 kubelet[2744]: I0117 00:30:31.017811 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-kernel\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019447 kubelet[2744]: I0117 00:30:31.017838 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-net\") pod \"1e94b669-209f-45fa-8242-66f711f5454d\" (UID: \"1e94b669-209f-45fa-8242-66f711f5454d\") "
Jan 17 00:30:31.019795 kubelet[2744]: I0117 00:30:31.017901 2744 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-hostproc\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\""
Jan 17 00:30:31.019795 kubelet[2744]: I0117 00:30:31.017923 2744 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-bpf-maps\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\""
Jan 17 00:30:31.019795 kubelet[2744]: I0117 00:30:31.017964 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.021380 kubelet[2744]: I0117 00:30:31.020055 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.021380 kubelet[2744]: I0117 00:30:31.021053 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.021380 kubelet[2744]: I0117 00:30:31.021100 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.021380 kubelet[2744]: I0117 00:30:31.021126 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.021380 kubelet[2744]: I0117 00:30:31.021151 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.021765 kubelet[2744]: I0117 00:30:31.021450 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.024807 kubelet[2744]: I0117 00:30:31.024762 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:30:31.031433 kubelet[2744]: I0117 00:30:31.031392 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b9047a-1695-449e-8eeb-5f48dc4d82ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "24b9047a-1695-449e-8eeb-5f48dc4d82ce" (UID: "24b9047a-1695-449e-8eeb-5f48dc4d82ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:30:31.033648 kubelet[2744]: I0117 00:30:31.033591 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e94b669-209f-45fa-8242-66f711f5454d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:30:31.033964 kubelet[2744]: I0117 00:30:31.033931 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e94b669-209f-45fa-8242-66f711f5454d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 17 00:30:31.034668 kubelet[2744]: I0117 00:30:31.034632 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:30:31.034978 kubelet[2744]: I0117 00:30:31.034950 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-kube-api-access-lh6q2" (OuterVolumeSpecName: "kube-api-access-lh6q2") pod "1e94b669-209f-45fa-8242-66f711f5454d" (UID: "1e94b669-209f-45fa-8242-66f711f5454d"). InnerVolumeSpecName "kube-api-access-lh6q2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:30:31.035310 kubelet[2744]: I0117 00:30:31.035266 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b9047a-1695-449e-8eeb-5f48dc4d82ce-kube-api-access-fssz6" (OuterVolumeSpecName: "kube-api-access-fssz6") pod "24b9047a-1695-449e-8eeb-5f48dc4d82ce" (UID: "24b9047a-1695-449e-8eeb-5f48dc4d82ce"). InnerVolumeSpecName "kube-api-access-fssz6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:30:31.119059 kubelet[2744]: I0117 00:30:31.118898 2744 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lh6q2\" (UniqueName: \"kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-kube-api-access-lh6q2\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\""
Jan 17 00:30:31.119059 kubelet[2744]: I0117 00:30:31.118947 2744 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e94b669-209f-45fa-8242-66f711f5454d-clustermesh-secrets\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\""
Jan 17 00:30:31.119059 kubelet[2744]: I0117 00:30:31.118966 2744 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fssz6\" (UniqueName: \"kubernetes.io/projected/24b9047a-1695-449e-8eeb-5f48dc4d82ce-kube-api-access-fssz6\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\""
Jan 17 00:30:31.119059 kubelet[2744]: I0117 00:30:31.118981 2744 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-xtables-lock\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\""
Jan 17 00:30:31.119059 kubelet[2744]: I0117 00:30:31.118995 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName:
\"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-cgroup\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119059 kubelet[2744]: I0117 00:30:31.119029 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e94b669-209f-45fa-8242-66f711f5454d-cilium-config-path\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119059 kubelet[2744]: I0117 00:30:31.119048 2744 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cni-path\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119553 kubelet[2744]: I0117 00:30:31.119062 2744 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e94b669-209f-45fa-8242-66f711f5454d-hubble-tls\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119553 kubelet[2744]: I0117 00:30:31.119078 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24b9047a-1695-449e-8eeb-5f48dc4d82ce-cilium-config-path\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119553 kubelet[2744]: I0117 00:30:31.119093 2744 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-cilium-run\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119553 kubelet[2744]: I0117 00:30:31.119107 2744 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-etc-cni-netd\") on node 
\"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119553 kubelet[2744]: I0117 00:30:31.119121 2744 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-lib-modules\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119553 kubelet[2744]: I0117 00:30:31.119135 2744 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-kernel\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.119553 kubelet[2744]: I0117 00:30:31.119149 2744 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e94b669-209f-45fa-8242-66f711f5454d-host-proc-sys-net\") on node \"ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9\" DevicePath \"\"" Jan 17 00:30:31.589002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49-rootfs.mount: Deactivated successfully. Jan 17 00:30:31.589259 systemd[1]: var-lib-kubelet-pods-24b9047a\x2d1695\x2d449e\x2d8eeb\x2d5f48dc4d82ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfssz6.mount: Deactivated successfully. Jan 17 00:30:31.589461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd-rootfs.mount: Deactivated successfully. Jan 17 00:30:31.589630 systemd[1]: var-lib-kubelet-pods-1e94b669\x2d209f\x2d45fa\x2d8242\x2d66f711f5454d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlh6q2.mount: Deactivated successfully. 
Jan 17 00:30:31.589853 systemd[1]: var-lib-kubelet-pods-1e94b669\x2d209f\x2d45fa\x2d8242\x2d66f711f5454d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:30:31.590038 systemd[1]: var-lib-kubelet-pods-1e94b669\x2d209f\x2d45fa\x2d8242\x2d66f711f5454d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:30:31.688736 kubelet[2744]: I0117 00:30:31.687052 2744 scope.go:117] "RemoveContainer" containerID="b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c" Jan 17 00:30:31.693370 containerd[1590]: time="2026-01-17T00:30:31.693248981Z" level=info msg="RemoveContainer for \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\"" Jan 17 00:30:31.706180 containerd[1590]: time="2026-01-17T00:30:31.706022876Z" level=info msg="RemoveContainer for \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\" returns successfully" Jan 17 00:30:31.714567 kubelet[2744]: I0117 00:30:31.713828 2744 scope.go:117] "RemoveContainer" containerID="73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2" Jan 17 00:30:31.717633 containerd[1590]: time="2026-01-17T00:30:31.717426332Z" level=info msg="RemoveContainer for \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\"" Jan 17 00:30:31.723062 containerd[1590]: time="2026-01-17T00:30:31.722996426Z" level=info msg="RemoveContainer for \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\" returns successfully" Jan 17 00:30:31.723576 kubelet[2744]: I0117 00:30:31.723538 2744 scope.go:117] "RemoveContainer" containerID="fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49" Jan 17 00:30:31.725166 containerd[1590]: time="2026-01-17T00:30:31.725112596Z" level=info msg="RemoveContainer for \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\"" Jan 17 00:30:31.730664 containerd[1590]: time="2026-01-17T00:30:31.730623577Z" level=info msg="RemoveContainer for 
\"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\" returns successfully" Jan 17 00:30:31.731540 kubelet[2744]: I0117 00:30:31.731502 2744 scope.go:117] "RemoveContainer" containerID="df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904" Jan 17 00:30:31.735008 containerd[1590]: time="2026-01-17T00:30:31.733946005Z" level=info msg="RemoveContainer for \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\"" Jan 17 00:30:31.740479 containerd[1590]: time="2026-01-17T00:30:31.740433129Z" level=info msg="RemoveContainer for \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\" returns successfully" Jan 17 00:30:31.742051 kubelet[2744]: I0117 00:30:31.742020 2744 scope.go:117] "RemoveContainer" containerID="aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff" Jan 17 00:30:31.744111 containerd[1590]: time="2026-01-17T00:30:31.744074344Z" level=info msg="RemoveContainer for \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\"" Jan 17 00:30:31.751517 containerd[1590]: time="2026-01-17T00:30:31.751474438Z" level=info msg="RemoveContainer for \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\" returns successfully" Jan 17 00:30:31.751916 kubelet[2744]: I0117 00:30:31.751861 2744 scope.go:117] "RemoveContainer" containerID="b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c" Jan 17 00:30:31.752365 kubelet[2744]: E0117 00:30:31.752332 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\": not found" containerID="b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c" Jan 17 00:30:31.752462 containerd[1590]: time="2026-01-17T00:30:31.752143963Z" level=error msg="ContainerStatus for \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\" failed" error="rpc error: code = NotFound desc 
= an error occurred when try to find container \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\": not found" Jan 17 00:30:31.752533 kubelet[2744]: I0117 00:30:31.752374 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c"} err="failed to get container status \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1c1b2995af7c1ad37d250cc3c57fbb476a5fade73842a1e4617b8978064e98c\": not found" Jan 17 00:30:31.752533 kubelet[2744]: I0117 00:30:31.752488 2744 scope.go:117] "RemoveContainer" containerID="73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2" Jan 17 00:30:31.752785 containerd[1590]: time="2026-01-17T00:30:31.752742094Z" level=error msg="ContainerStatus for \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\": not found" Jan 17 00:30:31.752933 kubelet[2744]: E0117 00:30:31.752903 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\": not found" containerID="73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2" Jan 17 00:30:31.753025 kubelet[2744]: I0117 00:30:31.752944 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2"} err="failed to get container status \"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"73a25c9954416325fed1ae0104074fdbd88e104ff1744ed9b90cbd77ff0673d2\": not found" Jan 17 00:30:31.753025 kubelet[2744]: I0117 00:30:31.752975 2744 scope.go:117] "RemoveContainer" containerID="fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49" Jan 17 00:30:31.753451 containerd[1590]: time="2026-01-17T00:30:31.753404781Z" level=error msg="ContainerStatus for \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\": not found" Jan 17 00:30:31.753840 kubelet[2744]: E0117 00:30:31.753725 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\": not found" containerID="fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49" Jan 17 00:30:31.753840 kubelet[2744]: I0117 00:30:31.753762 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49"} err="failed to get container status \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd0051a52b993d6b1e3d5ecc26286b059177b090c3befd98e9b41c8d86708a49\": not found" Jan 17 00:30:31.753840 kubelet[2744]: I0117 00:30:31.753789 2744 scope.go:117] "RemoveContainer" containerID="df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904" Jan 17 00:30:31.754319 containerd[1590]: time="2026-01-17T00:30:31.754272102Z" level=error msg="ContainerStatus for \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\": not found" Jan 17 00:30:31.754482 kubelet[2744]: E0117 00:30:31.754455 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\": not found" containerID="df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904" Jan 17 00:30:31.754575 kubelet[2744]: I0117 00:30:31.754491 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904"} err="failed to get container status \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\": rpc error: code = NotFound desc = an error occurred when try to find container \"df3d651106850e2b42558b76d7264fe7de5417cd81aad8276bc7457377191904\": not found" Jan 17 00:30:31.754575 kubelet[2744]: I0117 00:30:31.754515 2744 scope.go:117] "RemoveContainer" containerID="aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff" Jan 17 00:30:31.755127 kubelet[2744]: E0117 00:30:31.754957 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\": not found" containerID="aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff" Jan 17 00:30:31.755127 kubelet[2744]: I0117 00:30:31.755000 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff"} err="failed to get container status \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\": not found" Jan 17 
00:30:31.755127 kubelet[2744]: I0117 00:30:31.755023 2744 scope.go:117] "RemoveContainer" containerID="9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583" Jan 17 00:30:31.755266 containerd[1590]: time="2026-01-17T00:30:31.754787455Z" level=error msg="ContainerStatus for \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa25cf583c4e54a7ec03b60d1851abec3cae5b2384727e536718e88fe93c7dff\": not found" Jan 17 00:30:31.756463 containerd[1590]: time="2026-01-17T00:30:31.756430450Z" level=info msg="RemoveContainer for \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\"" Jan 17 00:30:31.761321 containerd[1590]: time="2026-01-17T00:30:31.761284856Z" level=info msg="RemoveContainer for \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\" returns successfully" Jan 17 00:30:31.761594 kubelet[2744]: I0117 00:30:31.761553 2744 scope.go:117] "RemoveContainer" containerID="9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583" Jan 17 00:30:31.762080 containerd[1590]: time="2026-01-17T00:30:31.761964008Z" level=error msg="ContainerStatus for \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\": not found" Jan 17 00:30:31.762256 kubelet[2744]: E0117 00:30:31.762215 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\": not found" containerID="9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583" Jan 17 00:30:31.762362 kubelet[2744]: I0117 00:30:31.762263 2744 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583"} err="failed to get container status \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e8d0c5cd3e88a5bc5f93ff1a85e3d698629eed1ea016b27eac22e4b5b32a583\": not found" Jan 17 00:30:32.219727 kubelet[2744]: I0117 00:30:32.219648 2744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e94b669-209f-45fa-8242-66f711f5454d" path="/var/lib/kubelet/pods/1e94b669-209f-45fa-8242-66f711f5454d/volumes" Jan 17 00:30:32.220725 kubelet[2744]: I0117 00:30:32.220654 2744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b9047a-1695-449e-8eeb-5f48dc4d82ce" path="/var/lib/kubelet/pods/24b9047a-1695-449e-8eeb-5f48dc4d82ce/volumes" Jan 17 00:30:32.537801 sshd[4381]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:32.543148 systemd[1]: sshd@24-10.128.0.62:22-4.153.228.146:44448.service: Deactivated successfully. Jan 17 00:30:32.550053 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:30:32.552006 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:30:32.553690 systemd-logind[1574]: Removed session 25. Jan 17 00:30:32.575584 systemd[1]: Started sshd@25-10.128.0.62:22-4.153.228.146:44458.service - OpenSSH per-connection server daemon (4.153.228.146:44458). Jan 17 00:30:32.797930 sshd[4550]: Accepted publickey for core from 4.153.228.146 port 44458 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:30:32.800062 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:32.806928 systemd-logind[1574]: New session 26 of user core. Jan 17 00:30:32.814095 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 17 00:30:33.279737 ntpd[1537]: Deleting interface #10 lxc_health, fe80::30b3:2ff:fe63:af7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs Jan 17 00:30:33.280388 ntpd[1537]: 17 Jan 00:30:33 ntpd[1537]: Deleting interface #10 lxc_health, fe80::30b3:2ff:fe63:af7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs Jan 17 00:30:33.863204 sshd[4550]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:33.878273 kubelet[2744]: I0117 00:30:33.871428 2744 memory_manager.go:355] "RemoveStaleState removing state" podUID="1e94b669-209f-45fa-8242-66f711f5454d" containerName="cilium-agent" Jan 17 00:30:33.878273 kubelet[2744]: I0117 00:30:33.871468 2744 memory_manager.go:355] "RemoveStaleState removing state" podUID="24b9047a-1695-449e-8eeb-5f48dc4d82ce" containerName="cilium-operator" Jan 17 00:30:33.877431 systemd[1]: sshd@25-10.128.0.62:22-4.153.228.146:44458.service: Deactivated successfully. Jan 17 00:30:33.893417 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:30:33.905813 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:30:33.925168 systemd[1]: Started sshd@26-10.128.0.62:22-4.153.228.146:44460.service - OpenSSH per-connection server daemon (4.153.228.146:44460). Jan 17 00:30:33.934976 systemd-logind[1574]: Removed session 26. 
Jan 17 00:30:34.039636 kubelet[2744]: I0117 00:30:34.039541 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-host-proc-sys-net\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.039636 kubelet[2744]: I0117 00:30:34.039597 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-hostproc\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040131 kubelet[2744]: I0117 00:30:34.039657 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-bpf-maps\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040131 kubelet[2744]: I0117 00:30:34.039684 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-cilium-cgroup\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040131 kubelet[2744]: I0117 00:30:34.039736 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-lib-modules\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040131 kubelet[2744]: I0117 00:30:34.039767 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d521d5b-9f12-455b-9e74-fcbd1e300723-clustermesh-secrets\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040131 kubelet[2744]: I0117 00:30:34.039811 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-cilium-run\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040131 kubelet[2744]: I0117 00:30:34.039837 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d521d5b-9f12-455b-9e74-fcbd1e300723-cilium-ipsec-secrets\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040506 kubelet[2744]: I0117 00:30:34.039865 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-host-proc-sys-kernel\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040506 kubelet[2744]: I0117 00:30:34.039913 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-xtables-lock\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040506 kubelet[2744]: I0117 00:30:34.039942 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4d521d5b-9f12-455b-9e74-fcbd1e300723-cilium-config-path\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040506 kubelet[2744]: I0117 00:30:34.039969 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d521d5b-9f12-455b-9e74-fcbd1e300723-hubble-tls\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040506 kubelet[2744]: I0117 00:30:34.040006 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-etc-cni-netd\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040506 kubelet[2744]: I0117 00:30:34.040034 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d521d5b-9f12-455b-9e74-fcbd1e300723-cni-path\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.040865 kubelet[2744]: I0117 00:30:34.040077 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvfqt\" (UniqueName: \"kubernetes.io/projected/4d521d5b-9f12-455b-9e74-fcbd1e300723-kube-api-access-wvfqt\") pod \"cilium-pc264\" (UID: \"4d521d5b-9f12-455b-9e74-fcbd1e300723\") " pod="kube-system/cilium-pc264" Jan 17 00:30:34.202822 sshd[4563]: Accepted publickey for core from 4.153.228.146 port 44460 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:30:34.210338 sshd[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:34.221498 systemd-logind[1574]: New session 
27 of user core. Jan 17 00:30:34.224305 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:30:34.285932 containerd[1590]: time="2026-01-17T00:30:34.285879296Z" level=info msg="StopPodSandbox for \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\"" Jan 17 00:30:34.286569 containerd[1590]: time="2026-01-17T00:30:34.286011088Z" level=info msg="TearDown network for sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" successfully" Jan 17 00:30:34.286569 containerd[1590]: time="2026-01-17T00:30:34.286030413Z" level=info msg="StopPodSandbox for \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" returns successfully" Jan 17 00:30:34.286735 containerd[1590]: time="2026-01-17T00:30:34.286662203Z" level=info msg="RemovePodSandbox for \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\"" Jan 17 00:30:34.286735 containerd[1590]: time="2026-01-17T00:30:34.286717354Z" level=info msg="Forcibly stopping sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\"" Jan 17 00:30:34.286878 containerd[1590]: time="2026-01-17T00:30:34.286796272Z" level=info msg="TearDown network for sandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" successfully" Jan 17 00:30:34.291762 containerd[1590]: time="2026-01-17T00:30:34.291675465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:30:34.291907 containerd[1590]: time="2026-01-17T00:30:34.291781708Z" level=info msg="RemovePodSandbox \"386de9bd88c8c76842ec5aa45658ca150a6b664074fc143c52300d989f8dd7cd\" returns successfully"
Jan 17 00:30:34.292376 containerd[1590]: time="2026-01-17T00:30:34.292328508Z" level=info msg="StopPodSandbox for \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\""
Jan 17 00:30:34.292504 containerd[1590]: time="2026-01-17T00:30:34.292432602Z" level=info msg="TearDown network for sandbox \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" successfully"
Jan 17 00:30:34.292504 containerd[1590]: time="2026-01-17T00:30:34.292453850Z" level=info msg="StopPodSandbox for \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" returns successfully"
Jan 17 00:30:34.292960 containerd[1590]: time="2026-01-17T00:30:34.292930442Z" level=info msg="RemovePodSandbox for \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\""
Jan 17 00:30:34.293056 containerd[1590]: time="2026-01-17T00:30:34.292963967Z" level=info msg="Forcibly stopping sandbox \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\""
Jan 17 00:30:34.293056 containerd[1590]: time="2026-01-17T00:30:34.293040708Z" level=info msg="TearDown network for sandbox \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" successfully"
Jan 17 00:30:34.296807 containerd[1590]: time="2026-01-17T00:30:34.296755796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:30:34.296970 containerd[1590]: time="2026-01-17T00:30:34.296810388Z" level=info msg="RemovePodSandbox \"e7e550c0adc089da33c9e116844c0fe16dc7357c632141610d439b247305cd49\" returns successfully"
Jan 17 00:30:34.379962 sshd[4563]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:34.385650 systemd[1]: sshd@26-10.128.0.62:22-4.153.228.146:44460.service: Deactivated successfully.
Jan 17 00:30:34.391653 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 00:30:34.393369 systemd-logind[1574]: Session 27 logged out. Waiting for processes to exit.
Jan 17 00:30:34.395256 systemd-logind[1574]: Removed session 27.
Jan 17 00:30:34.419382 systemd[1]: Started sshd@27-10.128.0.62:22-4.153.228.146:52478.service - OpenSSH per-connection server daemon (4.153.228.146:52478).
Jan 17 00:30:34.467460 kubelet[2744]: E0117 00:30:34.467194 2744 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 00:30:34.509109 containerd[1590]: time="2026-01-17T00:30:34.509057552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc264,Uid:4d521d5b-9f12-455b-9e74-fcbd1e300723,Namespace:kube-system,Attempt:0,}"
Jan 17 00:30:34.545771 containerd[1590]: time="2026-01-17T00:30:34.545603844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:30:34.545771 containerd[1590]: time="2026-01-17T00:30:34.545681024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:30:34.546630 containerd[1590]: time="2026-01-17T00:30:34.545740670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:30:34.546630 containerd[1590]: time="2026-01-17T00:30:34.545917514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:30:34.608255 containerd[1590]: time="2026-01-17T00:30:34.608125355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pc264,Uid:4d521d5b-9f12-455b-9e74-fcbd1e300723,Namespace:kube-system,Attempt:0,} returns sandbox id \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\""
Jan 17 00:30:34.612367 containerd[1590]: time="2026-01-17T00:30:34.612299260Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 00:30:34.626610 containerd[1590]: time="2026-01-17T00:30:34.626557304Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8607e4bf8bcb9315c18b5629c7ca5ccb88bdf56bbc3937119c7ab06593efc4e1\""
Jan 17 00:30:34.627644 containerd[1590]: time="2026-01-17T00:30:34.627585338Z" level=info msg="StartContainer for \"8607e4bf8bcb9315c18b5629c7ca5ccb88bdf56bbc3937119c7ab06593efc4e1\""
Jan 17 00:30:34.647915 sshd[4579]: Accepted publickey for core from 4.153.228.146 port 52478 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA
Jan 17 00:30:34.650666 sshd[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:30:34.661996 systemd-logind[1574]: New session 28 of user core.
Jan 17 00:30:34.674228 systemd[1]: Started session-28.scope - Session 28 of User core.
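The containerd entries above are emitted in logrus's plain-text key=value encoding (`time=…`, `level=…`, `msg="…"`). As an illustrative aside, not part of the captured journal, the sketch below shows one way such a line could be split into fields; the `parse_fields` helper and the regex are assumptions for demonstration, and the sample line is a simplified copy of a StartContainer entry above.

```python
import re

# Minimal sketch: split a containerd/logrus key=value log line into a dict.
# Quoted values may contain spaces and backslash escapes; bare values run
# to the next whitespace. Quoted alternative is tried first.
FIELD_RE = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_fields(line):
    fields = {}
    for key, quoted, bare in FIELD_RE.findall(line):
        fields[key] = quoted if quoted else bare
    return fields

# Simplified sample modeled on the StartContainer entries in this journal.
sample = 'time="2026-01-17T00:30:34.627585338Z" level=info msg="StartContainer returns successfully"'
record = parse_fields(sample)
# record["level"] == "info"
# record["msg"] == "StartContainer returns successfully"
```

The same helper applies to the `runtime=` and `type=` bare fields on the "loading plugin" entries, since those values contain no whitespace.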
Jan 17 00:30:34.723211 containerd[1590]: time="2026-01-17T00:30:34.723080657Z" level=info msg="StartContainer for \"8607e4bf8bcb9315c18b5629c7ca5ccb88bdf56bbc3937119c7ab06593efc4e1\" returns successfully"
Jan 17 00:30:34.774615 containerd[1590]: time="2026-01-17T00:30:34.774422777Z" level=info msg="shim disconnected" id=8607e4bf8bcb9315c18b5629c7ca5ccb88bdf56bbc3937119c7ab06593efc4e1 namespace=k8s.io
Jan 17 00:30:34.774615 containerd[1590]: time="2026-01-17T00:30:34.774613941Z" level=warning msg="cleaning up after shim disconnected" id=8607e4bf8bcb9315c18b5629c7ca5ccb88bdf56bbc3937119c7ab06593efc4e1 namespace=k8s.io
Jan 17 00:30:34.775351 containerd[1590]: time="2026-01-17T00:30:34.774629871Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:35.720400 containerd[1590]: time="2026-01-17T00:30:35.720343663Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 00:30:35.749173 containerd[1590]: time="2026-01-17T00:30:35.746901786Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"426d2b874a6d38a40d2e57aa9175378b5e26aba801376fb26976f575a1e8cc14\""
Jan 17 00:30:35.750002 containerd[1590]: time="2026-01-17T00:30:35.749941653Z" level=info msg="StartContainer for \"426d2b874a6d38a40d2e57aa9175378b5e26aba801376fb26976f575a1e8cc14\""
Jan 17 00:30:35.847752 containerd[1590]: time="2026-01-17T00:30:35.847676154Z" level=info msg="StartContainer for \"426d2b874a6d38a40d2e57aa9175378b5e26aba801376fb26976f575a1e8cc14\" returns successfully"
Jan 17 00:30:35.897794 containerd[1590]: time="2026-01-17T00:30:35.897652236Z" level=info msg="shim disconnected" id=426d2b874a6d38a40d2e57aa9175378b5e26aba801376fb26976f575a1e8cc14 namespace=k8s.io
Jan 17 00:30:35.897794 containerd[1590]: time="2026-01-17T00:30:35.897777913Z" level=warning msg="cleaning up after shim disconnected" id=426d2b874a6d38a40d2e57aa9175378b5e26aba801376fb26976f575a1e8cc14 namespace=k8s.io
Jan 17 00:30:35.898398 containerd[1590]: time="2026-01-17T00:30:35.897794608Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:36.164182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-426d2b874a6d38a40d2e57aa9175378b5e26aba801376fb26976f575a1e8cc14-rootfs.mount: Deactivated successfully.
Jan 17 00:30:36.726874 containerd[1590]: time="2026-01-17T00:30:36.726573954Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:30:36.756963 containerd[1590]: time="2026-01-17T00:30:36.755670640Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d85de9e5c9f8b81cbf0a0c7b39db6ac993b01cb5181278d0621ba530bc8f192d\""
Jan 17 00:30:36.757512 containerd[1590]: time="2026-01-17T00:30:36.757375003Z" level=info msg="StartContainer for \"d85de9e5c9f8b81cbf0a0c7b39db6ac993b01cb5181278d0621ba530bc8f192d\""
Jan 17 00:30:36.850343 containerd[1590]: time="2026-01-17T00:30:36.849610899Z" level=info msg="StartContainer for \"d85de9e5c9f8b81cbf0a0c7b39db6ac993b01cb5181278d0621ba530bc8f192d\" returns successfully"
Jan 17 00:30:36.894891 containerd[1590]: time="2026-01-17T00:30:36.894569414Z" level=info msg="shim disconnected" id=d85de9e5c9f8b81cbf0a0c7b39db6ac993b01cb5181278d0621ba530bc8f192d namespace=k8s.io
Jan 17 00:30:36.894891 containerd[1590]: time="2026-01-17T00:30:36.894642501Z" level=warning msg="cleaning up after shim disconnected" id=d85de9e5c9f8b81cbf0a0c7b39db6ac993b01cb5181278d0621ba530bc8f192d namespace=k8s.io
Jan 17 00:30:36.894891 containerd[1590]: time="2026-01-17T00:30:36.894658218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:37.164643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d85de9e5c9f8b81cbf0a0c7b39db6ac993b01cb5181278d0621ba530bc8f192d-rootfs.mount: Deactivated successfully.
Jan 17 00:30:37.426105 kubelet[2744]: I0117 00:30:37.425927 2744 setters.go:602] "Node became not ready" node="ci-4081-3-6-nightly-20260116-2100-0c4e8d1d2810f7a893d9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:30:37Z","lastTransitionTime":"2026-01-17T00:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:30:37.731030 containerd[1590]: time="2026-01-17T00:30:37.730943282Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:30:37.757866 containerd[1590]: time="2026-01-17T00:30:37.757795408Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da470a153c6e6d13d7027e4c105617021fed3a9ac763c0877aa17f298b4d938a\""
Jan 17 00:30:37.759491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669063995.mount: Deactivated successfully.
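The kubelet "Node became not ready" entry above carries the node's Ready condition as inline JSON. As a quick sketch (not part of the log tooling, just an illustration), the payload copied verbatim from that entry decodes directly:

```python
import json

# Condition payload copied from the "Node became not ready" kubelet entry.
condition = json.loads(
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2026-01-17T00:30:37Z",'
    '"lastTransitionTime":"2026-01-17T00:30:37Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'cni plugin not initialized"}'
)

# A node reports NotReady when the Ready condition has status "False".
node_not_ready = condition["type"] == "Ready" and condition["status"] == "False"
```

This matches the repeated "Container runtime network not ready" kubelet errors earlier in the journal: the condition clears once the Cilium agent below brings the CNI up.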
Jan 17 00:30:37.764856 containerd[1590]: time="2026-01-17T00:30:37.764816222Z" level=info msg="StartContainer for \"da470a153c6e6d13d7027e4c105617021fed3a9ac763c0877aa17f298b4d938a\""
Jan 17 00:30:37.856204 containerd[1590]: time="2026-01-17T00:30:37.856052947Z" level=info msg="StartContainer for \"da470a153c6e6d13d7027e4c105617021fed3a9ac763c0877aa17f298b4d938a\" returns successfully"
Jan 17 00:30:37.894399 containerd[1590]: time="2026-01-17T00:30:37.894085699Z" level=info msg="shim disconnected" id=da470a153c6e6d13d7027e4c105617021fed3a9ac763c0877aa17f298b4d938a namespace=k8s.io
Jan 17 00:30:37.894399 containerd[1590]: time="2026-01-17T00:30:37.894158945Z" level=warning msg="cleaning up after shim disconnected" id=da470a153c6e6d13d7027e4c105617021fed3a9ac763c0877aa17f298b4d938a namespace=k8s.io
Jan 17 00:30:37.894399 containerd[1590]: time="2026-01-17T00:30:37.894173773Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:30:38.165545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da470a153c6e6d13d7027e4c105617021fed3a9ac763c0877aa17f298b4d938a-rootfs.mount: Deactivated successfully.
Jan 17 00:30:38.736245 containerd[1590]: time="2026-01-17T00:30:38.736194824Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:30:38.761125 containerd[1590]: time="2026-01-17T00:30:38.761066445Z" level=info msg="CreateContainer within sandbox \"63ec8e52452a319cc2748c7b2cceea62f9adb4d4f0ff93ec302ebc205f66f505\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b50fb8c5f7d62945b8f42a95cd02de8fbcc52be338bc095c568286f758bc3aca\""
Jan 17 00:30:38.763623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031215394.mount: Deactivated successfully.
Jan 17 00:30:38.765862 containerd[1590]: time="2026-01-17T00:30:38.765672972Z" level=info msg="StartContainer for \"b50fb8c5f7d62945b8f42a95cd02de8fbcc52be338bc095c568286f758bc3aca\""
Jan 17 00:30:38.858133 containerd[1590]: time="2026-01-17T00:30:38.857871841Z" level=info msg="StartContainer for \"b50fb8c5f7d62945b8f42a95cd02de8fbcc52be338bc095c568286f758bc3aca\" returns successfully"
Jan 17 00:30:39.425770 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 00:30:39.761961 kubelet[2744]: I0117 00:30:39.761868 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pc264" podStartSLOduration=6.761840849 podStartE2EDuration="6.761840849s" podCreationTimestamp="2026-01-17 00:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:30:39.760922139 +0000 UTC m=+125.753669847" watchObservedRunningTime="2026-01-17 00:30:39.761840849 +0000 UTC m=+125.754588557"
Jan 17 00:30:42.816321 systemd-networkd[1221]: lxc_health: Link UP
Jan 17 00:30:42.827100 systemd-networkd[1221]: lxc_health: Gained carrier
Jan 17 00:30:44.179040 systemd-networkd[1221]: lxc_health: Gained IPv6LL
Jan 17 00:30:46.279695 ntpd[1537]: Listen normally on 13 lxc_health [fe80::a0dd:86ff:fe52:578a%14]:123
Jan 17 00:30:46.280495 ntpd[1537]: 17 Jan 00:30:46 ntpd[1537]: Listen normally on 13 lxc_health [fe80::a0dd:86ff:fe52:578a%14]:123
Jan 17 00:30:47.943653 systemd[1]: run-containerd-runc-k8s.io-b50fb8c5f7d62945b8f42a95cd02de8fbcc52be338bc095c568286f758bc3aca-runc.pDkmbW.mount: Deactivated successfully.
Jan 17 00:30:48.166047 sshd[4579]: pam_unix(sshd:session): session closed for user core
Jan 17 00:30:48.176555 systemd[1]: sshd@27-10.128.0.62:22-4.153.228.146:52478.service: Deactivated successfully.
Jan 17 00:30:48.182841 systemd-logind[1574]: Session 28 logged out. Waiting for processes to exit.
Jan 17 00:30:48.183116 systemd[1]: session-28.scope: Deactivated successfully.
Jan 17 00:30:48.188110 systemd-logind[1574]: Removed session 28.
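The CreateContainer entries in this journal record the Cilium pod's container sequence: four short-lived init containers followed by the long-running agent. As an illustrative sketch only (the container names are taken from this journal, but the abbreviated msg strings and the regex are assumptions for demonstration), the start order can be recovered from the `&ContainerMetadata{Name:...}` fragments:

```python
import re

# Abbreviated msg fragments modeled on the CreateContainer entries above,
# in the order they appear in the journal.
entries = [
    "CreateContainer within sandbox ... for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}",
    "CreateContainer within sandbox ... for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}",
    "CreateContainer within sandbox ... for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}",
    "CreateContainer within sandbox ... for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}",
    "CreateContainer within sandbox ... for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}",
]

# Pull the container name out of the ContainerMetadata fragment.
NAME_RE = re.compile(r"ContainerMetadata\{Name:([^,]+),")
start_order = [NAME_RE.search(e).group(1) for e in entries]
# start_order == ["mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs",
#                 "clean-cilium-state", "cilium-agent"]
```

The "shim disconnected" / "cleaning up dead shim" trios after each of the first four names are expected: init containers exit after running once, and only cilium-agent stays up, after which lxc_health gains carrier and the node condition clears.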