Jan 17 00:21:10.150880 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:21:10.150942 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:21:10.150962 kernel: BIOS-provided physical RAM map:
Jan 17 00:21:10.150995 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 17 00:21:10.151007 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 17 00:21:10.151019 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 17 00:21:10.151033 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 17 00:21:10.151051 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 17 00:21:10.151064 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 17 00:21:10.151077 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 17 00:21:10.151091 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 17 00:21:10.151103 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 17 00:21:10.151116 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 17 00:21:10.151130 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 17 00:21:10.151151 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 17 00:21:10.151165 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 17 00:21:10.151180 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 17 00:21:10.151194 kernel: NX (Execute Disable) protection: active
Jan 17 00:21:10.151209 kernel: APIC: Static calls initialized
Jan 17 00:21:10.151223 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:21:10.151238 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Jan 17 00:21:10.151265 kernel: SMBIOS 2.4 present.
Jan 17 00:21:10.151280 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Jan 17 00:21:10.151295 kernel: Hypervisor detected: KVM
Jan 17 00:21:10.151315 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:21:10.151331 kernel: kvm-clock: using sched offset of 13353509704 cycles
Jan 17 00:21:10.151348 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:21:10.151364 kernel: tsc: Detected 2299.998 MHz processor
Jan 17 00:21:10.151380 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:21:10.151396 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:21:10.151412 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 17 00:21:10.151428 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 17 00:21:10.151443 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:21:10.151463 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 17 00:21:10.151480 kernel: Using GB pages for direct mapping
Jan 17 00:21:10.151495 kernel: Secure boot disabled
Jan 17 00:21:10.151511 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:21:10.151527 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 17 00:21:10.151543 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 17 00:21:10.151559 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 17 00:21:10.151583 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 17 00:21:10.151605 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 17 00:21:10.151622 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Jan 17 00:21:10.151639 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 17 00:21:10.151656 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 17 00:21:10.151674 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 17 00:21:10.151691 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 17 00:21:10.151712 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 17 00:21:10.151730 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 17 00:21:10.151747 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 17 00:21:10.151764 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 17 00:21:10.151782 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 17 00:21:10.151799 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 17 00:21:10.151816 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 17 00:21:10.151833 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 17 00:21:10.151850 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 17 00:21:10.151872 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 17 00:21:10.151890 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:21:10.151907 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:21:10.151925 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:21:10.151941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 17 00:21:10.151959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 17 00:21:10.152026 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 17 00:21:10.152042 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 17 00:21:10.152061 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 17 00:21:10.152087 kernel: Zone ranges:
Jan 17 00:21:10.152106 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:21:10.152126 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:21:10.152145 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 00:21:10.152165 kernel: Movable zone start for each node
Jan 17 00:21:10.152185 kernel: Early memory node ranges
Jan 17 00:21:10.152205 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 17 00:21:10.152224 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 17 00:21:10.152257 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 17 00:21:10.152283 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 17 00:21:10.152301 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 00:21:10.152320 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 17 00:21:10.152338 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:21:10.152358 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 17 00:21:10.152377 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 17 00:21:10.152394 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 00:21:10.152412 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 17 00:21:10.152428 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:21:10.152446 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:21:10.152467 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:21:10.152484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:21:10.152501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:21:10.152518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:21:10.152537 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:21:10.152555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:21:10.152571 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:21:10.152587 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 00:21:10.152608 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:21:10.152625 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:21:10.152642 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:21:10.152660 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:21:10.152678 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:21:10.152695 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:21:10.152712 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:21:10.152729 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:21:10.152748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:21:10.152771 kernel: random: crng init done
Jan 17 00:21:10.152788 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 00:21:10.152806 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:21:10.152823 kernel: Fallback order for Node 0: 0
Jan 17 00:21:10.152841 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 17 00:21:10.152858 kernel: Policy zone: Normal
Jan 17 00:21:10.152876 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:21:10.152893 kernel: software IO TLB: area num 2.
Jan 17 00:21:10.152911 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347148K reserved, 0K cma-reserved)
Jan 17 00:21:10.152935 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:21:10.152953 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:21:10.153033 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:21:10.153049 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:21:10.153066 kernel: Dynamic Preempt: voluntary
Jan 17 00:21:10.153084 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:21:10.153103 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:21:10.153119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:21:10.153159 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:21:10.153179 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:21:10.153199 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:21:10.153223 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:21:10.153253 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:21:10.153273 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:21:10.153293 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:21:10.153314 kernel: Console: colour dummy device 80x25
Jan 17 00:21:10.153338 kernel: printk: console [ttyS0] enabled
Jan 17 00:21:10.153359 kernel: ACPI: Core revision 20230628
Jan 17 00:21:10.153379 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:21:10.153399 kernel: x2apic enabled
Jan 17 00:21:10.153419 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:21:10.153440 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 17 00:21:10.153460 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 00:21:10.153481 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 17 00:21:10.153500 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 17 00:21:10.153524 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 17 00:21:10.153544 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:21:10.153563 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 00:21:10.153583 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 00:21:10.153604 kernel: Spectre V2 : Mitigation: IBRS
Jan 17 00:21:10.153623 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:21:10.153644 kernel: RETBleed: Mitigation: IBRS
Jan 17 00:21:10.153661 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:21:10.153680 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 17 00:21:10.153704 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:21:10.153724 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:21:10.153744 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:21:10.153763 kernel: active return thunk: its_return_thunk
Jan 17 00:21:10.153783 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:21:10.153803 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:21:10.153824 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:21:10.153844 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:21:10.153865 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:21:10.153892 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:21:10.153912 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:21:10.153932 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:21:10.153953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:21:10.154003 kernel: landlock: Up and running.
Jan 17 00:21:10.154024 kernel: SELinux: Initializing.
Jan 17 00:21:10.154044 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:21:10.154065 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:21:10.154085 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 17 00:21:10.154111 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:21:10.154132 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:21:10.154152 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:21:10.154173 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 17 00:21:10.154194 kernel: signal: max sigframe size: 1776
Jan 17 00:21:10.154215 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:21:10.154236 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:21:10.154265 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:21:10.154285 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:21:10.154311 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:21:10.154331 kernel: .... node #0, CPUs: #1
Jan 17 00:21:10.154354 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:21:10.154375 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:21:10.154396 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:21:10.154416 kernel: smpboot: Max logical packages: 1
Jan 17 00:21:10.154436 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 17 00:21:10.154457 kernel: devtmpfs: initialized
Jan 17 00:21:10.154482 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:21:10.154502 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 17 00:21:10.154523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:21:10.154544 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:21:10.154565 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:21:10.154585 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:21:10.154605 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:21:10.154625 kernel: audit: type=2000 audit(1768609268.636:1): state=initialized audit_enabled=0 res=1
Jan 17 00:21:10.154645 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:21:10.154671 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:21:10.154692 kernel: cpuidle: using governor menu
Jan 17 00:21:10.154712 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:21:10.154732 kernel: dca service started, version 1.12.1
Jan 17 00:21:10.154753 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:21:10.154773 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:21:10.154794 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:21:10.154814 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:21:10.154835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:21:10.154860 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:21:10.154880 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:21:10.154900 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:21:10.154920 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:21:10.154940 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:21:10.154976 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:21:10.154995 kernel: ACPI: Interpreter enabled
Jan 17 00:21:10.155009 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:21:10.155024 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:21:10.155046 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:21:10.155061 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 00:21:10.155079 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 17 00:21:10.155095 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:21:10.155433 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:21:10.155651 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:21:10.155844 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:21:10.155878 kernel: PCI host bridge to bus 0000:00
Jan 17 00:21:10.156129 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:21:10.156326 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:21:10.156498 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:21:10.156668 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 17 00:21:10.156841 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:21:10.157084 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:21:10.157346 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 17 00:21:10.157574 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 00:21:10.157795 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:21:10.158046 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 17 00:21:10.158266 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 17 00:21:10.158467 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 17 00:21:10.158689 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:21:10.158897 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 17 00:21:10.159159 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 17 00:21:10.159385 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:21:10.159587 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 17 00:21:10.159786 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 17 00:21:10.159813 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:21:10.159844 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:21:10.159863 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:21:10.159882 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:21:10.159901 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:21:10.159920 kernel: iommu: Default domain type: Translated
Jan 17 00:21:10.159939 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:21:10.159958 kernel: efivars: Registered efivars operations
Jan 17 00:21:10.159997 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:21:10.160017 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:21:10.160042 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 17 00:21:10.160062 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 17 00:21:10.160080 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 17 00:21:10.160098 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 17 00:21:10.160116 kernel: vgaarb: loaded
Jan 17 00:21:10.160135 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:21:10.160154 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:21:10.160173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:21:10.160192 kernel: pnp: PnP ACPI init
Jan 17 00:21:10.160215 kernel: pnp: PnP ACPI: found 7 devices
Jan 17 00:21:10.160236 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:21:10.160264 kernel: NET: Registered PF_INET protocol family
Jan 17 00:21:10.160284 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:21:10.160303 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 00:21:10.160323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:21:10.160342 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:21:10.160361 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 00:21:10.160380 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 00:21:10.160403 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:21:10.160422 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:21:10.160442 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:21:10.160461 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:21:10.160671 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:21:10.160848 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:21:10.161047 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:21:10.161221 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 17 00:21:10.161478 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:21:10.161505 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:21:10.161525 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:21:10.161543 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 17 00:21:10.161564 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:21:10.161585 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 00:21:10.161606 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:21:10.161627 kernel: Initialise system trusted keyrings
Jan 17 00:21:10.161654 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 00:21:10.161675 kernel: Key type asymmetric registered
Jan 17 00:21:10.161694 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:21:10.161715 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:21:10.161735 kernel: io scheduler mq-deadline registered
Jan 17 00:21:10.161757 kernel: io scheduler kyber registered
Jan 17 00:21:10.161777 kernel: io scheduler bfq registered
Jan 17 00:21:10.161797 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:21:10.161820 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 00:21:10.162122 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 17 00:21:10.162153 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 17 00:21:10.162372 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 17 00:21:10.162401 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 00:21:10.162608 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 17 00:21:10.162635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:21:10.162657 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:21:10.162677 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 00:21:10.162698 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 17 00:21:10.162727 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 17 00:21:10.162957 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 17 00:21:10.163067 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:21:10.163086 kernel: i8042: Warning: Keylock active
Jan 17 00:21:10.163104 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:21:10.163123 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:21:10.163346 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:21:10.163542 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:21:10.163726 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:21:09 UTC (1768609269)
Jan 17 00:21:10.163910 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:21:10.163936 kernel: intel_pstate: CPU model not supported
Jan 17 00:21:10.163956 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:21:10.163992 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:21:10.164010 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:21:10.164031 kernel: Segment Routing with IPv6
Jan 17 00:21:10.164051 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:21:10.164075 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:21:10.164093 kernel: Key type dns_resolver registered
Jan 17 00:21:10.164114 kernel: IPI shorthand broadcast: enabled
Jan 17 00:21:10.164134 kernel: sched_clock: Marking stable (975006456, 170537494)->(1232558243, -87014293)
Jan 17 00:21:10.164155 kernel: registered taskstats version 1
Jan 17 00:21:10.164175 kernel: Loading compiled-in X.509 certificates
Jan 17 00:21:10.164195 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:21:10.164216 kernel: Key type .fscrypt registered
Jan 17 00:21:10.164236 kernel: Key type fscrypt-provisioning registered
Jan 17 00:21:10.164271 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:21:10.164290 kernel: ima: No architecture policies found
Jan 17 00:21:10.164310 kernel: clk: Disabling unused clocks
Jan 17 00:21:10.164332 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:21:10.164352 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:21:10.164372 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:21:10.164390 kernel: Run /init as init process
Jan 17 00:21:10.164411 kernel: with arguments:
Jan 17 00:21:10.164431 kernel: /init
Jan 17 00:21:10.164456 kernel: with environment:
Jan 17 00:21:10.164476 kernel: HOME=/
Jan 17 00:21:10.164495 kernel: TERM=linux
Jan 17 00:21:10.164516 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:21:10.164541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:21:10.164567 systemd[1]: Detected virtualization google.
Jan 17 00:21:10.164589 systemd[1]: Detected architecture x86-64.
Jan 17 00:21:10.164615 systemd[1]: Running in initrd.
Jan 17 00:21:10.164635 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:21:10.164657 systemd[1]: Hostname set to .
Jan 17 00:21:10.164680 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:21:10.164701 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:21:10.164722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:21:10.164744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:21:10.164766 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:21:10.164793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:21:10.164814 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:21:10.164836 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:21:10.164861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:21:10.164882 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:21:10.164898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:21:10.164915 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:21:10.164942 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:21:10.164983 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:21:10.165054 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:21:10.165081 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:21:10.165101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:21:10.165124 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:21:10.165152 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:21:10.165173 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:21:10.165191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:21:10.165209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:21:10.165230 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:21:10.165257 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:21:10.165274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:21:10.165300 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:21:10.165327 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:21:10.165361 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:21:10.165384 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:21:10.165404 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:21:10.165425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:10.165501 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 00:21:10.165553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:21:10.165575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:21:10.165596 systemd-journald[184]: Journal started
Jan 17 00:21:10.165638 systemd-journald[184]: Runtime Journal (/run/log/journal/17490432d86a45e9a8cb9f936e2cfaf2) is 8.0M, max 148.7M, 140.7M free.
Jan 17 00:21:10.175193 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:21:10.174700 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:21:10.175343 systemd-modules-load[185]: Inserted module 'overlay'
Jan 17 00:21:10.191248 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:21:10.195902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:21:10.224302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:10.231635 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:21:10.240153 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:21:10.241160 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 17 00:21:10.243098 kernel: Bridge firewalling registered
Jan 17 00:21:10.243455 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:21:10.253678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:21:10.261347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:21:10.269430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:21:10.274165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:21:10.296511 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:21:10.307298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:21:10.308848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:21:10.325396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:21:10.339399 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:21:10.370078 systemd-resolved[214]: Positive Trust Anchors:
Jan 17 00:21:10.370100 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:21:10.379270 dracut-cmdline[219]: dracut-dracut-053
Jan 17 00:21:10.379270 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:21:10.370167 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:21:10.377564 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 17 00:21:10.381395 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:21:10.397351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:21:10.492010 kernel: SCSI subsystem initialized
Jan 17 00:21:10.505029 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:21:10.518003 kernel: iscsi: registered transport (tcp)
Jan 17 00:21:10.544156 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:21:10.544256 kernel: QLogic iSCSI HBA Driver
Jan 17 00:21:10.604269 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:21:10.611315 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:21:10.658689 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:21:10.658788 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:21:10.658818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:21:10.708051 kernel: raid6: avx2x4 gen() 17687 MB/s
Jan 17 00:21:10.725049 kernel: raid6: avx2x2 gen() 17307 MB/s
Jan 17 00:21:10.742602 kernel: raid6: avx2x1 gen() 12886 MB/s
Jan 17 00:21:10.742687 kernel: raid6: using algorithm avx2x4 gen() 17687 MB/s
Jan 17 00:21:10.760707 kernel: raid6: .... xor() 6592 MB/s, rmw enabled
Jan 17 00:21:10.760944 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:21:10.786028 kernel: xor: automatically using best checksumming function avx
Jan 17 00:21:10.975013 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:21:10.991083 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:21:10.997377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:21:11.043403 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 17 00:21:11.050849 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:21:11.088485 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:21:11.109272 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jan 17 00:21:11.154831 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:21:11.161322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:21:11.286764 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:21:11.306299 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:21:11.367081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:21:11.378584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:21:11.428200 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:21:11.428280 kernel: blk-mq: reduced tag depth to 10240
Jan 17 00:21:11.390995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:21:11.457109 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 17 00:21:11.456807 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:21:11.481223 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:21:11.477874 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:21:11.514012 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:21:11.518580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:21:11.682169 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:21:11.682225 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Jan 17 00:21:11.682603 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 17 00:21:11.682863 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 17 00:21:11.683180 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 17 00:21:11.683446 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:21:11.683697 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:21:11.683741 kernel: GPT:17805311 != 33554431
Jan 17 00:21:11.683765 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:21:11.683793 kernel: GPT:17805311 != 33554431
Jan 17 00:21:11.683815 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:21:11.683836 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:21:11.683861 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 17 00:21:11.518807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:21:11.555819 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:21:11.590097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:21:11.590437 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:11.613129 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:11.673683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:11.695051 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:21:11.790007 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (443)
Jan 17 00:21:11.801033 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457)
Jan 17 00:21:11.801895 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 17 00:21:11.825622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:11.853681 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 17 00:21:11.886310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 17 00:21:11.886689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 17 00:21:11.928183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 00:21:11.956300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:21:11.984470 disk-uuid[539]: Primary Header is updated.
Jan 17 00:21:11.984470 disk-uuid[539]: Secondary Entries is updated.
Jan 17 00:21:11.984470 disk-uuid[539]: Secondary Header is updated.
Jan 17 00:21:12.021242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:21:11.991367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:21:12.042994 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:21:12.068989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:21:12.070850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:21:13.065938 disk-uuid[541]: The operation has completed successfully.
Jan 17 00:21:13.076566 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:21:13.166124 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:21:13.166306 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:21:13.197325 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:21:13.235256 sh[566]: Success
Jan 17 00:21:13.262438 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:21:13.371873 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:21:13.380851 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:21:13.416423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:21:13.458811 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:21:13.459003 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:13.459031 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:21:13.468287 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:21:13.481208 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:21:13.514073 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:21:13.522314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:21:13.523636 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:21:13.528313 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:21:13.555135 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:21:13.623714 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:13.623783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:13.623810 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:21:13.623847 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:21:13.623874 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:21:13.646877 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:21:13.666224 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:13.669318 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:21:13.686343 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:21:13.771031 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:21:13.778290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:21:13.905162 systemd-networkd[750]: lo: Link UP
Jan 17 00:21:13.905183 systemd-networkd[750]: lo: Gained carrier
Jan 17 00:21:13.914908 ignition[692]: Ignition 2.19.0
Jan 17 00:21:13.908248 systemd-networkd[750]: Enumeration completed
Jan 17 00:21:13.914934 ignition[692]: Stage: fetch-offline
Jan 17 00:21:13.908431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:21:13.915101 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:13.909332 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:21:13.915126 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:21:13.909340 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:21:13.915558 ignition[692]: parsed url from cmdline: ""
Jan 17 00:21:13.911923 systemd-networkd[750]: eth0: Link UP
Jan 17 00:21:13.915567 ignition[692]: no config URL provided
Jan 17 00:21:13.911932 systemd-networkd[750]: eth0: Gained carrier
Jan 17 00:21:13.915582 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:21:13.911953 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:21:13.915610 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:21:13.926736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:21:13.915625 ignition[692]: failed to fetch config: resource requires networking
Jan 17 00:21:13.929166 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a'
Jan 17 00:21:13.916436 ignition[692]: Ignition finished successfully
Jan 17 00:21:13.929187 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.88/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 17 00:21:14.003773 ignition[759]: Ignition 2.19.0
Jan 17 00:21:13.947285 systemd[1]: Reached target network.target - Network.
Jan 17 00:21:14.003785 ignition[759]: Stage: fetch
Jan 17 00:21:13.968774 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:21:14.004142 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:14.017802 unknown[759]: fetched base config from "system"
Jan 17 00:21:14.004157 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:21:14.017814 unknown[759]: fetched base config from "system"
Jan 17 00:21:14.004323 ignition[759]: parsed url from cmdline: ""
Jan 17 00:21:14.017825 unknown[759]: fetched user config from "gcp"
Jan 17 00:21:14.004331 ignition[759]: no config URL provided
Jan 17 00:21:14.033830 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:21:14.004344 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:21:14.062315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:21:14.004357 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:21:14.122769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:21:14.004387 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 17 00:21:14.142322 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:21:14.010053 ignition[759]: GET result: OK
Jan 17 00:21:14.200645 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:21:14.010187 ignition[759]: parsing config with SHA512: 6794a22dd76b0168bb0813af787ce29bf891a30e2f70e6fb11a2a572c180130a1baf5ef646a9ddf1e53886f6b3e0564bd303e4d28a505ed13934b8ae5bfa27c9
Jan 17 00:21:14.219770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:21:14.018351 ignition[759]: fetch: fetch complete
Jan 17 00:21:14.238398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:21:14.018358 ignition[759]: fetch: fetch passed
Jan 17 00:21:14.259382 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:21:14.018454 ignition[759]: Ignition finished successfully
Jan 17 00:21:14.265489 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:21:14.118753 ignition[765]: Ignition 2.19.0
Jan 17 00:21:14.293532 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:21:14.118763 ignition[765]: Stage: kargs
Jan 17 00:21:14.324434 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:21:14.119088 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:14.119103 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:21:14.120948 ignition[765]: kargs: kargs passed
Jan 17 00:21:14.121086 ignition[765]: Ignition finished successfully
Jan 17 00:21:14.197565 ignition[770]: Ignition 2.19.0
Jan 17 00:21:14.197579 ignition[770]: Stage: disks
Jan 17 00:21:14.197921 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:14.197936 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:21:14.199199 ignition[770]: disks: disks passed
Jan 17 00:21:14.199275 ignition[770]: Ignition finished successfully
Jan 17 00:21:14.386311 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:21:14.569407 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:21:14.598198 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:21:14.755222 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:21:14.756257 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:21:14.757309 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:21:14.793255 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:21:14.804171 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:21:14.849846 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (787)
Jan 17 00:21:14.849911 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:14.849935 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:14.849958 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:21:14.863836 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:21:14.899323 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:21:14.899377 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:21:14.864020 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:21:14.864080 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:21:14.883656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:21:14.908594 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:21:14.953319 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:21:15.099573 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:21:15.111215 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:21:15.122191 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:21:15.132202 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:21:15.314242 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:21:15.321271 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:21:15.363395 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:15.365926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:21:15.375497 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:21:15.415051 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:21:15.428325 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:21:15.432483 ignition[899]: INFO : Ignition 2.19.0
Jan 17 00:21:15.432483 ignition[899]: INFO : Stage: mount
Jan 17 00:21:15.432483 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:15.432483 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:21:15.432483 ignition[899]: INFO : mount: mount passed
Jan 17 00:21:15.432483 ignition[899]: INFO : Ignition finished successfully
Jan 17 00:21:15.457248 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:21:15.465232 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 17 00:21:15.490339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:21:15.558029 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (911)
Jan 17 00:21:15.576512 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:15.576619 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:15.576646 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:21:15.600493 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 00:21:15.600596 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:21:15.604089 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:21:15.648821 ignition[928]: INFO : Ignition 2.19.0
Jan 17 00:21:15.648821 ignition[928]: INFO : Stage: files
Jan 17 00:21:15.663243 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:15.663243 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 00:21:15.663243 ignition[928]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:21:15.663243 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:21:15.663243 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:21:15.722212 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:21:15.722212 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:21:15.722212 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:21:15.722212 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:21:15.722212 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:21:15.666012 unknown[928]: wrote ssh authorized keys file for user: core
Jan 17 00:21:15.805257 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:21:15.984180 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:21:15.984180 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:21:16.018203 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 17 00:21:16.208533 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 17 00:21:16.893309 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:21:17.470227 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 17 00:21:17.470227 ignition[928]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:21:17.509280 ignition[928]: INFO : files: files passed
Jan 17 00:21:17.509280 ignition[928]: INFO : Ignition finished successfully
Jan 17 00:21:17.476409 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:21:17.495512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:21:17.515475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:21:17.562733 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:21:17.727273 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:17.727273 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:17.562913 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:21:17.787285 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:17.590170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:21:17.599605 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:21:17.630263 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:21:17.740338 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:21:17.740491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:21:17.742592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:21:17.777423 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:21:17.798479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:21:17.806325 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:21:17.849175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:21:17.875307 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:21:17.928022 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:21:17.942661 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:21:17.966506 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:21:17.987585 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:21:17.987805 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:21:18.021542 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:21:18.041452 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:21:18.059528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:21:18.078556 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:21:18.102503 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:21:18.124589 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:21:18.143499 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:21:18.165451 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:21:18.185495 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:21:18.205829 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:21:18.224531 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:21:18.224750 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:21:18.257617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
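The Ignition files stage that finished a few entries above (the helm tarball, the cilium CLI, the Kubernetes sysext link and image, and prepare-helm.service) is driven by provider userdata that never appears in the journal. The following is a hypothetical reconstruction of that config in Ignition spec 3.x form, built as a Python dict: only the paths and URLs are taken from the log, the SSH key and the prepare-helm.service body are placeholders, and the inline files (install.sh, nginx.yaml, the nfs-*.yaml manifests, update.conf) are omitted for brevity.

import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
    }]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
        ],
    },
    "systemd": {"units": [{
        "name": "prepare-helm.service",
        "enabled": True,
        "contents": "[Unit]\nDescription=placeholder\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n\n[Install]\nWantedBy=multi-user.target\n",
    }]},
}

print(json.dumps(config, indent=2))
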
Jan 17 00:21:18.277553 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:21:18.296406 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:21:18.296637 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:21:18.317468 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:21:18.317735 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:21:18.348529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:21:18.432236 ignition[980]: INFO : Ignition 2.19.0 Jan 17 00:21:18.432236 ignition[980]: INFO : Stage: umount Jan 17 00:21:18.432236 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:18.432236 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:18.432236 ignition[980]: INFO : umount: umount passed Jan 17 00:21:18.432236 ignition[980]: INFO : Ignition finished successfully Jan 17 00:21:18.348771 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:21:18.372916 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:21:18.373267 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:21:18.398374 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:21:18.441322 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:21:18.441626 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:21:18.456541 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:21:18.466463 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:21:18.466786 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:21:18.482801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:21:18.483119 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:21:18.546072 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:21:18.546260 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:21:18.570155 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:21:18.571132 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:21:18.571288 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:21:18.591190 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:21:18.591472 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:21:18.612936 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:21:18.613173 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:21:18.639455 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:21:18.639579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:21:18.649569 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:21:18.649645 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:21:18.686475 systemd[1]: Stopped target network.target - Network. Jan 17 00:21:18.705253 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:21:18.705519 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:21:18.714598 systemd[1]: Stopped target paths.target - Path Units. 
Jan 17 00:21:18.750236 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:21:18.750499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:21:18.771242 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:21:18.771448 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:21:18.797326 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:21:18.797491 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:21:18.816515 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:21:18.816864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:21:18.839346 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:21:18.839556 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:21:18.858357 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:21:18.858472 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:21:18.878324 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:21:18.878444 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:21:18.897629 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:21:18.903121 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 17 00:21:18.917487 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:21:18.936738 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:21:18.936895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:21:18.956233 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:21:18.956588 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:21:18.976345 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:21:18.976431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:21:19.003217 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:21:19.006506 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:21:19.006633 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:21:19.055598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:21:19.055693 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:21:19.064637 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:21:19.064722 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:21:19.082555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:21:19.082653 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:21:19.112542 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:21:19.544221 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 00:21:19.133840 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:21:19.134077 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:21:19.162630 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:21:19.162801 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 17 00:21:19.180533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:21:19.180592 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:21:19.197517 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:21:19.197600 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:21:19.237653 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:21:19.237746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:21:19.274566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:21:19.274699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:21:19.322297 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:21:19.344179 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:21:19.344340 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:21:19.344558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:21:19.344610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:19.375904 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:21:19.376100 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:21:19.396885 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:21:19.397057 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:21:19.421852 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:21:19.446316 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:21:19.495019 systemd[1]: Switching root. 
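"Switching root." is initrd-switch-root.service handing PID 1 over to the assembled /sysroot hierarchy; the timestamps jumping back to 00:21:10 in the entries that follow are the same early-boot journal entries being flushed after the switch, not a second boot. A sketch of the equivalent invocation; the exact flags used by Flatcar's unit are an assumption:

import subprocess

# Hand the running systemd over to the prepared root. This ends the initrd
# phase and is immediately followed by "Journal stopped" on the console.
subprocess.run(["systemctl", "--no-block", "switch-root", "/sysroot"], check=True)
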
Jan 17 00:21:19.789255 systemd-journald[184]: Journal stopped Jan 17 00:21:10.150880 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:21:10.150942 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:21:10.150962 kernel: BIOS-provided physical RAM map: Jan 17 00:21:10.150995 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 17 00:21:10.151007 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 17 00:21:10.151019 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 17 00:21:10.151033 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 17 00:21:10.151051 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 17 00:21:10.151064 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 17 00:21:10.151077 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 17 00:21:10.151091 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 17 00:21:10.151103 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 17 00:21:10.151116 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 17 00:21:10.151130 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 17 00:21:10.151151 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 17 00:21:10.151165 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 17 00:21:10.151180 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 17 00:21:10.151194 kernel: NX (Execute Disable) protection: active Jan 17 00:21:10.151209 kernel: APIC: Static calls initialized Jan 17 00:21:10.151223 kernel: efi: EFI v2.7 by EDK II Jan 17 00:21:10.151238 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Jan 17 00:21:10.151265 kernel: SMBIOS 2.4 present. 
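The BIOS-e820 map replayed above is the firmware's view of RAM; summing the ranges marked "usable" is a quick sanity check against the "Memory: ... 7860584K" total reported later in the same boot. A small parser sketch over journal text in the format printed here:

import re

E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(journal_text: str) -> int:
    # Ranges are inclusive, hence the +1; only firmware-"usable" ranges count.
    return sum(int(end, 16) - int(start, 16) + 1
               for start, end, kind in E820.findall(journal_text)
               if kind == "usable")

# For the map above this comes to roughly 7.5 GiB, in line with the
# 7860584K total in the later "Memory:" line.
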
Jan 17 00:21:10.151280 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025 Jan 17 00:21:10.151295 kernel: Hypervisor detected: KVM Jan 17 00:21:10.151315 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:21:10.151331 kernel: kvm-clock: using sched offset of 13353509704 cycles Jan 17 00:21:10.151348 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:21:10.151364 kernel: tsc: Detected 2299.998 MHz processor Jan 17 00:21:10.151380 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:21:10.151396 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:21:10.151412 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 17 00:21:10.151428 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 17 00:21:10.151443 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:21:10.151463 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 17 00:21:10.151480 kernel: Using GB pages for direct mapping Jan 17 00:21:10.151495 kernel: Secure boot disabled Jan 17 00:21:10.151511 kernel: ACPI: Early table checksum verification disabled Jan 17 00:21:10.151527 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 17 00:21:10.151543 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 17 00:21:10.151559 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 17 00:21:10.151583 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 17 00:21:10.151605 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 17 00:21:10.151622 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Jan 17 00:21:10.151639 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 17 00:21:10.151656 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 17 00:21:10.151674 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 17 00:21:10.151691 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 17 00:21:10.151712 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 17 00:21:10.151730 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 17 00:21:10.151747 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 17 00:21:10.151764 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 17 00:21:10.151782 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 17 00:21:10.151799 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 17 00:21:10.151816 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 17 00:21:10.151833 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 17 00:21:10.151850 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 17 00:21:10.151872 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 17 00:21:10.151890 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:21:10.151907 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:21:10.151925 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 00:21:10.151941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 17 00:21:10.151959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 00:21:10.152026 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 00:21:10.152042 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 00:21:10.152061 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 17 00:21:10.152087 kernel: Zone ranges: Jan 17 00:21:10.152106 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:21:10.152126 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 00:21:10.152145 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:21:10.152165 kernel: Movable zone start for each node Jan 17 00:21:10.152185 kernel: Early memory node ranges Jan 17 00:21:10.152205 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 00:21:10.152224 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 00:21:10.152257 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 00:21:10.152283 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 00:21:10.152301 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 00:21:10.152320 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 00:21:10.152338 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:21:10.152358 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 00:21:10.152377 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 00:21:10.152394 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 00:21:10.152412 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 00:21:10.152428 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 00:21:10.152446 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:21:10.152467 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:21:10.152484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:21:10.152501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:21:10.152518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:21:10.152537 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:21:10.152555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:21:10.152571 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:21:10.152587 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:21:10.152608 kernel: Booting paravirtualized kernel on KVM Jan 17 00:21:10.152625 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:21:10.152642 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:21:10.152660 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 17 00:21:10.152678 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:21:10.152695 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:21:10.152712 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:21:10.152729 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:21:10.152748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:21:10.152771 kernel: random: crng init done Jan 17 00:21:10.152788 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 00:21:10.152806 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:21:10.152823 kernel: Fallback order for Node 0: 0 Jan 17 00:21:10.152841 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 00:21:10.152858 kernel: Policy zone: Normal Jan 17 00:21:10.152876 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:21:10.152893 kernel: software IO TLB: area num 2. Jan 17 00:21:10.152911 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 347148K reserved, 0K cma-reserved) Jan 17 00:21:10.152935 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:21:10.152953 kernel: Kernel/User page tables isolation: enabled Jan 17 00:21:10.153033 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:21:10.153049 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:21:10.153066 kernel: Dynamic Preempt: voluntary Jan 17 00:21:10.153084 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:21:10.153103 kernel: rcu: RCU event tracing is enabled. Jan 17 00:21:10.153119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:21:10.153159 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:21:10.153179 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:21:10.153199 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:21:10.153223 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:21:10.153253 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:21:10.153273 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 00:21:10.153293 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:21:10.153314 kernel: Console: colour dummy device 80x25 Jan 17 00:21:10.153338 kernel: printk: console [ttyS0] enabled Jan 17 00:21:10.153359 kernel: ACPI: Core revision 20230628 Jan 17 00:21:10.153379 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:21:10.153399 kernel: x2apic enabled Jan 17 00:21:10.153419 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:21:10.153440 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 00:21:10.153460 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:21:10.153481 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 17 00:21:10.153500 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 00:21:10.153524 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 00:21:10.153544 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:21:10.153563 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 00:21:10.153583 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 00:21:10.153604 kernel: Spectre V2 : Mitigation: IBRS Jan 17 00:21:10.153623 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:21:10.153644 kernel: RETBleed: Mitigation: IBRS Jan 17 00:21:10.153661 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 00:21:10.153680 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 00:21:10.153704 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 00:21:10.153724 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 00:21:10.153744 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:21:10.153763 kernel: active return thunk: its_return_thunk Jan 17 00:21:10.153783 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:21:10.153803 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:21:10.153824 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:21:10.153844 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:21:10.153865 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:21:10.153892 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 00:21:10.153912 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:21:10.153932 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:21:10.153953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:21:10.154003 kernel: landlock: Up and running. Jan 17 00:21:10.154024 kernel: SELinux: Initializing. Jan 17 00:21:10.154044 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:21:10.154065 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:21:10.154085 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 00:21:10.154111 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:21:10.154132 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:21:10.154152 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:21:10.154173 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 00:21:10.154194 kernel: signal: max sigframe size: 1776 Jan 17 00:21:10.154215 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:21:10.154236 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:21:10.154265 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:21:10.154285 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:21:10.154311 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:21:10.154331 kernel: .... 
node #0, CPUs: #1 Jan 17 00:21:10.154354 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 00:21:10.154375 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 00:21:10.154396 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:21:10.154416 kernel: smpboot: Max logical packages: 1 Jan 17 00:21:10.154436 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 00:21:10.154457 kernel: devtmpfs: initialized Jan 17 00:21:10.154482 kernel: x86/mm: Memory block size: 128MB Jan 17 00:21:10.154502 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 00:21:10.154523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:21:10.154544 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:21:10.154565 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:21:10.154585 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:21:10.154605 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:21:10.154625 kernel: audit: type=2000 audit(1768609268.636:1): state=initialized audit_enabled=0 res=1 Jan 17 00:21:10.154645 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:21:10.154671 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:21:10.154692 kernel: cpuidle: using governor menu Jan 17 00:21:10.154712 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:21:10.154732 kernel: dca service started, version 1.12.1 Jan 17 00:21:10.154753 kernel: PCI: Using configuration type 1 for base access Jan 17 00:21:10.154773 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 00:21:10.154794 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:21:10.154814 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:21:10.154835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:21:10.154860 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:21:10.154880 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:21:10.154900 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:21:10.154920 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:21:10.154940 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 00:21:10.154976 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:21:10.154995 kernel: ACPI: Interpreter enabled Jan 17 00:21:10.155009 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:21:10.155024 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:21:10.155046 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:21:10.155061 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 00:21:10.155079 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 00:21:10.155095 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:21:10.155433 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:21:10.155651 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 00:21:10.155844 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 00:21:10.155878 kernel: PCI host bridge to bus 0000:00 Jan 17 00:21:10.156129 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:21:10.156326 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:21:10.156498 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:21:10.156668 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 17 00:21:10.156841 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:21:10.157084 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 00:21:10.157346 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 17 00:21:10.157574 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 00:21:10.157795 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 00:21:10.158046 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 17 00:21:10.158266 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 17 00:21:10.158467 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 17 00:21:10.158689 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:21:10.158897 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 17 00:21:10.159159 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 17 00:21:10.159385 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:21:10.159587 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 17 00:21:10.159786 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 17 00:21:10.159813 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:21:10.159844 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:21:10.159863 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 
00:21:10.159882 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:21:10.159901 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 00:21:10.159920 kernel: iommu: Default domain type: Translated Jan 17 00:21:10.159939 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:21:10.159958 kernel: efivars: Registered efivars operations Jan 17 00:21:10.159997 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:21:10.160017 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:21:10.160042 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 17 00:21:10.160062 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 17 00:21:10.160080 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 17 00:21:10.160098 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 17 00:21:10.160116 kernel: vgaarb: loaded Jan 17 00:21:10.160135 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:21:10.160154 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:21:10.160173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:21:10.160192 kernel: pnp: PnP ACPI init Jan 17 00:21:10.160215 kernel: pnp: PnP ACPI: found 7 devices Jan 17 00:21:10.160236 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:21:10.160264 kernel: NET: Registered PF_INET protocol family Jan 17 00:21:10.160284 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 00:21:10.160303 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 00:21:10.160323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:21:10.160342 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:21:10.160361 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 00:21:10.160380 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 00:21:10.160403 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:21:10.160422 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:21:10.160442 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:21:10.160461 kernel: NET: Registered PF_XDP protocol family Jan 17 00:21:10.160671 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:21:10.160848 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:21:10.161047 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:21:10.161221 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 17 00:21:10.161478 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 00:21:10.161505 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:21:10.161525 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:21:10.161543 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 17 00:21:10.161564 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:21:10.161585 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 00:21:10.161606 kernel: clocksource: Switched to clocksource tsc Jan 17 00:21:10.161627 kernel: Initialise system trusted keyrings Jan 17 00:21:10.161654 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 
Jan 17 00:21:10.161675 kernel: Key type asymmetric registered Jan 17 00:21:10.161694 kernel: Asymmetric key parser 'x509' registered Jan 17 00:21:10.161715 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:21:10.161735 kernel: io scheduler mq-deadline registered Jan 17 00:21:10.161757 kernel: io scheduler kyber registered Jan 17 00:21:10.161777 kernel: io scheduler bfq registered Jan 17 00:21:10.161797 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:21:10.161820 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 00:21:10.162122 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 17 00:21:10.162153 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 17 00:21:10.162372 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 17 00:21:10.162401 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 00:21:10.162608 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 17 00:21:10.162635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:21:10.162657 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:21:10.162677 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 00:21:10.162698 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 17 00:21:10.162727 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 17 00:21:10.162957 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 17 00:21:10.163067 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:21:10.163086 kernel: i8042: Warning: Keylock active Jan 17 00:21:10.163104 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:21:10.163123 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:21:10.163346 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 00:21:10.163542 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 00:21:10.163726 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:21:09 UTC (1768609269) Jan 17 00:21:10.163910 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 00:21:10.163936 kernel: intel_pstate: CPU model not supported Jan 17 00:21:10.163956 kernel: pstore: Using crash dump compression: deflate Jan 17 00:21:10.163992 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:21:10.164010 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:21:10.164031 kernel: Segment Routing with IPv6 Jan 17 00:21:10.164051 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:21:10.164075 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:21:10.164093 kernel: Key type dns_resolver registered Jan 17 00:21:10.164114 kernel: IPI shorthand broadcast: enabled Jan 17 00:21:10.164134 kernel: sched_clock: Marking stable (975006456, 170537494)->(1232558243, -87014293) Jan 17 00:21:10.164155 kernel: registered taskstats version 1 Jan 17 00:21:10.164175 kernel: Loading compiled-in X.509 certificates Jan 17 00:21:10.164195 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:21:10.164216 kernel: Key type .fscrypt registered Jan 17 00:21:10.164236 kernel: Key type fscrypt-provisioning registered Jan 17 00:21:10.164271 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:21:10.164290 kernel: ima: No architecture policies found Jan 17 00:21:10.164310 kernel: clk: Disabling unused clocks Jan 17 
00:21:10.164332 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:21:10.164352 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:21:10.164372 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:21:10.164390 kernel: Run /init as init process Jan 17 00:21:10.164411 kernel: with arguments: Jan 17 00:21:10.164431 kernel: /init Jan 17 00:21:10.164456 kernel: with environment: Jan 17 00:21:10.164476 kernel: HOME=/ Jan 17 00:21:10.164495 kernel: TERM=linux Jan 17 00:21:10.164516 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:21:10.164541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:21:10.164567 systemd[1]: Detected virtualization google. Jan 17 00:21:10.164589 systemd[1]: Detected architecture x86-64. Jan 17 00:21:10.164615 systemd[1]: Running in initrd. Jan 17 00:21:10.164635 systemd[1]: No hostname configured, using default hostname. Jan 17 00:21:10.164657 systemd[1]: Hostname set to . Jan 17 00:21:10.164680 systemd[1]: Initializing machine ID from random generator. Jan 17 00:21:10.164701 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:21:10.164722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:21:10.164744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:21:10.164766 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:21:10.164793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:21:10.164814 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:21:10.164836 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:21:10.164861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:21:10.164882 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:21:10.164898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:21:10.164915 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:21:10.164942 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:21:10.164983 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:21:10.165054 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:21:10.165081 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:21:10.165101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:21:10.165124 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:21:10.165152 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:21:10.165173 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:21:10.165191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
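The "Expecting device dev-disk-by\x2dlabel-..." entries above are systemd .device units waiting for udev to create the matching /dev/disk/by-label symlinks (EFI-SYSTEM, OEM, ROOT) plus /dev/mapper/usr. A quick way to inspect that mapping from userspace once the links exist:

import os

BY_LABEL = "/dev/disk/by-label"

# Print label -> backing block device, the association the .device units
# above report as "Found device ..." once udev has processed the disk.
for label in sorted(os.listdir(BY_LABEL)):
    print(f"{label} -> {os.path.realpath(os.path.join(BY_LABEL, label))}")
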
Jan 17 00:21:10.165209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:21:10.165230 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:21:10.165257 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:21:10.165274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:21:10.165300 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:21:10.165327 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:21:10.165361 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:21:10.165384 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:21:10.165404 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:21:10.165425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:21:10.165501 systemd-journald[184]: Collecting audit messages is disabled. Jan 17 00:21:10.165553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:21:10.165575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:21:10.165596 systemd-journald[184]: Journal started Jan 17 00:21:10.165638 systemd-journald[184]: Runtime Journal (/run/log/journal/17490432d86a45e9a8cb9f936e2cfaf2) is 8.0M, max 148.7M, 140.7M free. Jan 17 00:21:10.175193 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:21:10.174700 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:21:10.175343 systemd-modules-load[185]: Inserted module 'overlay' Jan 17 00:21:10.191248 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:21:10.195902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:21:10.224302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:10.231635 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:21:10.240153 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:21:10.241160 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 17 00:21:10.243098 kernel: Bridge firewalling registered Jan 17 00:21:10.243455 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:21:10.253678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:21:10.261347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:21:10.269430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:21:10.274165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:21:10.296511 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:21:10.307298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:21:10.308848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:21:10.325396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:21:10.339399 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
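systemd-modules-load inserting 'br_netfilter' above is what produces the "Bridge firewalling registered" line and answers the bridge warning that precedes it; on Kubernetes-bound hosts the module is usually paired with the bridge-nf-call sysctls. Whether this image actually sets those sysctls is not visible in the log, so treat the values below as an assumption:

import subprocess

# Load the module, then let iptables see bridged IPv4/IPv6 traffic again.
subprocess.run(["modprobe", "br_netfilter"], check=True)
for key in ("net.bridge.bridge-nf-call-iptables",
            "net.bridge.bridge-nf-call-ip6tables"):
    subprocess.run(["sysctl", "-w", f"{key}=1"], check=True)
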
Jan 17 00:21:10.370078 systemd-resolved[214]: Positive Trust Anchors: Jan 17 00:21:10.370100 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:21:10.379270 dracut-cmdline[219]: dracut-dracut-053 Jan 17 00:21:10.379270 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:21:10.370167 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:21:10.377564 systemd-resolved[214]: Defaulting to hostname 'linux'. Jan 17 00:21:10.381395 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:21:10.397351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:21:10.492010 kernel: SCSI subsystem initialized Jan 17 00:21:10.505029 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:21:10.518003 kernel: iscsi: registered transport (tcp) Jan 17 00:21:10.544156 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:21:10.544256 kernel: QLogic iSCSI HBA Driver Jan 17 00:21:10.604269 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:21:10.611315 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:21:10.658689 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:21:10.658788 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:21:10.658818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:21:10.708051 kernel: raid6: avx2x4 gen() 17687 MB/s Jan 17 00:21:10.725049 kernel: raid6: avx2x2 gen() 17307 MB/s Jan 17 00:21:10.742602 kernel: raid6: avx2x1 gen() 12886 MB/s Jan 17 00:21:10.742687 kernel: raid6: using algorithm avx2x4 gen() 17687 MB/s Jan 17 00:21:10.760707 kernel: raid6: .... xor() 6592 MB/s, rmw enabled Jan 17 00:21:10.760944 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:21:10.786028 kernel: xor: automatically using best checksumming function avx Jan 17 00:21:10.975013 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:21:10.991083 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:21:10.997377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:21:11.043403 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 17 00:21:11.050849 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:21:11.088485 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
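dracut-cmdline echoes the kernel command line above because most of the initrd keys off it: mount.usr and verity.usrhash select and verify the USR partition, root=LABEL=ROOT picks the writable root, and flatcar.oem.id=gce selects the provider. A minimal parser for parameters in that key=value form:

def parse_cmdline(path: str = "/proc/cmdline") -> dict:
    # "key=value" words become entries; bare flags map to "". Quoting is
    # ignored, which is sufficient for the parameters shown above.
    params = {}
    with open(path) as f:
        for word in f.read().split():
            key, _, value = word.partition("=")
            params[key] = value
    return params

# e.g. parse_cmdline()["flatcar.oem.id"] -> "gce"
#      parse_cmdline()["verity.usrhash"] -> the dm-verity root hash above
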
Jan 17 00:21:11.109272 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jan 17 00:21:11.154831 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:21:11.161322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:21:11.286764 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:21:11.306299 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:21:11.367081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:21:11.378584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:21:11.428200 kernel: scsi host0: Virtio SCSI HBA Jan 17 00:21:11.428280 kernel: blk-mq: reduced tag depth to 10240 Jan 17 00:21:11.390995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:21:11.457109 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 17 00:21:11.456807 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:21:11.481223 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:21:11.477874 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:21:11.514012 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:21:11.518580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:21:11.682169 kernel: AES CTR mode by8 optimization enabled Jan 17 00:21:11.682225 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Jan 17 00:21:11.682603 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 17 00:21:11.682863 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 17 00:21:11.683180 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 17 00:21:11.683446 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 00:21:11.683697 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:21:11.683741 kernel: GPT:17805311 != 33554431 Jan 17 00:21:11.683765 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:21:11.683793 kernel: GPT:17805311 != 33554431 Jan 17 00:21:11.683815 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:21:11.683836 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:21:11.683861 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 17 00:21:11.518807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:21:11.555819 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:21:11.590097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:21:11.590437 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:11.613129 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:21:11.673683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:21:11.695051 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:21:11.790007 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (443) Jan 17 00:21:11.801033 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457) Jan 17 00:21:11.801895 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. 
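The GPT complaints above ("Primary header thinks Alt. header is not at the end of the disk", 17805311 != 33554431) are the normal symptom of a disk image copied onto a larger persistent disk: the backup GPT header still sits at the last LBA of the original image. The numbers from the log work out as follows:

disk_sectors = 33554432      # "[sda] 33554432 512-byte logical blocks"
image_last_lba = 17805311    # where the backup GPT header currently lives
SECTOR = 512

print(disk_sectors * SECTOR / 2**30)           # 16.0  -> the 16 GiB persistent disk
print((image_last_lba + 1) * SECTOR / 2**30)   # ~8.49 -> size of the original image
# Flatcar relocates the backup header and grows the ROOT partition on first
# boot, so the "Use GNU Parted to correct GPT errors" hint needs no manual action.
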
Jan 17 00:21:11.825622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:11.853681 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 17 00:21:11.886310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 17 00:21:11.886689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 17 00:21:11.928183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 00:21:11.956300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:21:11.984470 disk-uuid[539]: Primary Header is updated. Jan 17 00:21:11.984470 disk-uuid[539]: Secondary Entries is updated. Jan 17 00:21:11.984470 disk-uuid[539]: Secondary Header is updated. Jan 17 00:21:12.021242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:21:11.991367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:21:12.042994 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:21:12.068989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:21:12.070850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:21:13.065938 disk-uuid[541]: The operation has completed successfully. Jan 17 00:21:13.076566 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:21:13.166124 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:21:13.166306 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:21:13.197325 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:21:13.235256 sh[566]: Success Jan 17 00:21:13.262438 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 00:21:13.371873 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:21:13.380851 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:21:13.416423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:21:13.458811 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:21:13.459003 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:21:13.459031 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:21:13.468287 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:21:13.481208 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:21:13.514073 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 00:21:13.522314 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:21:13.523636 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:21:13.528313 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:21:13.555135 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
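The verity-setup.service entries above prepare /dev/mapper/usr, the dm-verity device backing the read-only /usr partition named by the verity.usr= and verity.usrhash= arguments on the kernel command line. As a generic sketch of how dm-verity pairs a data device, a hash device, and a root hash (the devices /dev/vdb1 and /dev/vdb2 here are hypothetical, and this is not the exact mechanism verity-setup.service uses, which derives everything from the command line):

# Build the hash tree and capture the root hash that veritysetup prints
veritysetup format /dev/vdb1 /dev/vdb2 | tee verity.out
ROOT_HASH=$(awk '/^Root hash/{print $3}' verity.out)

# Open the verified mapping; any tampered block causes read errors
veritysetup open /dev/vdb1 usr-verity /dev/vdb2 "$ROOT_HASH"
mount -o ro /dev/mapper/usr-verity /mnt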
Jan 17 00:21:13.623714 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:21:13.623783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:21:13.623810 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:21:13.623847 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:21:13.623874 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:21:13.646877 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:21:13.666224 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:21:13.669318 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:21:13.686343 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:21:13.771031 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:21:13.778290 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:21:13.905162 systemd-networkd[750]: lo: Link UP Jan 17 00:21:13.905183 systemd-networkd[750]: lo: Gained carrier Jan 17 00:21:13.914908 ignition[692]: Ignition 2.19.0 Jan 17 00:21:13.908248 systemd-networkd[750]: Enumeration completed Jan 17 00:21:13.914934 ignition[692]: Stage: fetch-offline Jan 17 00:21:13.908431 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:21:13.915101 ignition[692]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:13.909332 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:21:13.915126 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:13.909340 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:21:13.915558 ignition[692]: parsed url from cmdline: "" Jan 17 00:21:13.911923 systemd-networkd[750]: eth0: Link UP Jan 17 00:21:13.915567 ignition[692]: no config URL provided Jan 17 00:21:13.911932 systemd-networkd[750]: eth0: Gained carrier Jan 17 00:21:13.915582 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:21:13.911953 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:21:13.915610 ignition[692]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:21:13.926736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:21:13.915625 ignition[692]: failed to fetch config: resource requires networking Jan 17 00:21:13.929166 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a' Jan 17 00:21:13.916436 ignition[692]: Ignition finished successfully Jan 17 00:21:13.929187 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.88/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 00:21:14.003773 ignition[759]: Ignition 2.19.0 Jan 17 00:21:13.947285 systemd[1]: Reached target network.target - Network. Jan 17 00:21:14.003785 ignition[759]: Stage: fetch Jan 17 00:21:13.968774 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
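The systemd-networkd entries above show eth0 being matched by the catch-all /usr/lib/systemd/network/zz-default.network and acquiring its DHCPv4 lease (10.128.0.88/32 via 169.254.169.254). A unit of that general shape can be sketched as follows; the file name and contents are illustrative, not a copy of the unit Flatcar ships:

# Install a catch-all DHCP policy and ask networkd to re-read its configuration
cat <<'EOF' | sudo tee /etc/systemd/network/10-dhcp-all.network
[Match]
Name=*

[Network]
DHCP=yes
EOF
sudo networkctl reload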
Jan 17 00:21:14.004142 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:14.017802 unknown[759]: fetched base config from "system" Jan 17 00:21:14.004157 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:14.017814 unknown[759]: fetched base config from "system" Jan 17 00:21:14.004323 ignition[759]: parsed url from cmdline: "" Jan 17 00:21:14.017825 unknown[759]: fetched user config from "gcp" Jan 17 00:21:14.004331 ignition[759]: no config URL provided Jan 17 00:21:14.033830 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:21:14.004344 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:21:14.062315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:21:14.004357 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:21:14.122769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:21:14.004387 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 17 00:21:14.142322 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:21:14.010053 ignition[759]: GET result: OK Jan 17 00:21:14.200645 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:21:14.010187 ignition[759]: parsing config with SHA512: 6794a22dd76b0168bb0813af787ce29bf891a30e2f70e6fb11a2a572c180130a1baf5ef646a9ddf1e53886f6b3e0564bd303e4d28a505ed13934b8ae5bfa27c9 Jan 17 00:21:14.219770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:21:14.018351 ignition[759]: fetch: fetch complete Jan 17 00:21:14.238398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:21:14.018358 ignition[759]: fetch: fetch passed Jan 17 00:21:14.259382 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:21:14.018454 ignition[759]: Ignition finished successfully Jan 17 00:21:14.265489 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:21:14.118753 ignition[765]: Ignition 2.19.0 Jan 17 00:21:14.293532 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:21:14.118763 ignition[765]: Stage: kargs Jan 17 00:21:14.324434 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:21:14.119088 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:14.119103 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:14.120948 ignition[765]: kargs: kargs passed Jan 17 00:21:14.121086 ignition[765]: Ignition finished successfully Jan 17 00:21:14.197565 ignition[770]: Ignition 2.19.0 Jan 17 00:21:14.197579 ignition[770]: Stage: disks Jan 17 00:21:14.197921 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:14.197936 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:14.199199 ignition[770]: disks: disks passed Jan 17 00:21:14.199275 ignition[770]: Ignition finished successfully Jan 17 00:21:14.386311 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 00:21:14.569407 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:21:14.598198 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:21:14.755222 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. 
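Ignition's fetch stage above pulls the instance's user-data attribute from the GCE metadata service and logs the SHA512 of the document it parsed. The same document can be retrieved by hand; the metadata server only answers requests carrying the Metadata-Flavor header:

# Fetch the same instance attribute that Ignition reads
curl -s -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"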
Jan 17 00:21:14.756257 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:21:14.757309 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:21:14.793255 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:21:14.804171 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:21:14.849846 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (787) Jan 17 00:21:14.849911 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:21:14.849935 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:21:14.849958 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:21:14.863836 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 00:21:14.899323 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:21:14.899377 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:21:14.864020 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:21:14.864080 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:21:14.883656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:21:14.908594 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:21:14.953319 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:21:15.099573 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:21:15.111215 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:21:15.122191 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:21:15.132202 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:21:15.314242 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:21:15.321271 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:21:15.363395 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:21:15.365926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:21:15.375497 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:21:15.415051 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:21:15.428325 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:21:15.432483 ignition[899]: INFO : Ignition 2.19.0 Jan 17 00:21:15.432483 ignition[899]: INFO : Stage: mount Jan 17 00:21:15.432483 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:15.432483 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:15.432483 ignition[899]: INFO : mount: mount passed Jan 17 00:21:15.432483 ignition[899]: INFO : Ignition finished successfully Jan 17 00:21:15.457248 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:21:15.465232 systemd-networkd[750]: eth0: Gained IPv6LL Jan 17 00:21:15.490339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
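The mounts above assemble the initramfs view of the target system: the ext4 ROOT filesystem (sda9) on /sysroot, the btrfs OEM partition (sda6) on /sysroot/oem, and /usr on /sysroot/usr. A quick way to relate the by-label and by-partlabel device units in this log to the underlying partitions, assuming the same single-disk layout on /dev/sda:

# Labels, partition labels and filesystem types for every partition on sda
lsblk -o NAME,LABEL,PARTLABEL,FSTYPE,SIZE /dev/sda

# The same information read straight from the GPT
sudo sgdisk -p /dev/sda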
Jan 17 00:21:15.558029 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (911) Jan 17 00:21:15.576512 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:21:15.576619 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:21:15.576646 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:21:15.600493 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:21:15.600596 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:21:15.604089 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:21:15.648821 ignition[928]: INFO : Ignition 2.19.0 Jan 17 00:21:15.648821 ignition[928]: INFO : Stage: files Jan 17 00:21:15.663243 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:15.663243 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:15.663243 ignition[928]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:21:15.663243 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:21:15.663243 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:21:15.722212 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:21:15.722212 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:21:15.722212 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:21:15.722212 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:21:15.722212 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:21:15.666012 unknown[928]: wrote ssh authorized keys file for user: core Jan 17 00:21:15.805257 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:21:15.984180 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:21:15.984180 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:21:16.018203 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 00:21:16.208533 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:21:16.362045 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:21:16.494220 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:21:16.893309 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:21:17.470227 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:21:17.470227 ignition[928]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:21:17.509280 ignition[928]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:21:17.509280 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:21:17.509280 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:21:17.509280 ignition[928]: INFO : files: files passed Jan 17 00:21:17.509280 ignition[928]: INFO : Ignition finished successfully Jan 17 00:21:17.476409 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:21:17.495512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 17 00:21:17.515475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:21:17.562733 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:21:17.727273 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:17.727273 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:17.562913 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:21:17.787285 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:17.590170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:21:17.599605 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:21:17.630263 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:21:17.740338 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:21:17.740491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:21:17.742592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:21:17.777423 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:21:17.798479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:21:17.806325 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:21:17.849175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:21:17.875307 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:21:17.928022 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:21:17.942661 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:21:17.966506 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:21:17.987585 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:21:17.987805 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:21:18.021542 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:21:18.041452 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:21:18.059528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:21:18.078556 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:21:18.102503 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:21:18.124589 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:21:18.143499 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:21:18.165451 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:21:18.185495 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:21:18.205829 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:21:18.224531 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:21:18.224750 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:21:18.257617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 00:21:18.277553 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:21:18.296406 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:21:18.296637 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:21:18.317468 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:21:18.317735 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:21:18.348529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:21:18.432236 ignition[980]: INFO : Ignition 2.19.0 Jan 17 00:21:18.432236 ignition[980]: INFO : Stage: umount Jan 17 00:21:18.432236 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:18.432236 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 00:21:18.432236 ignition[980]: INFO : umount: umount passed Jan 17 00:21:18.432236 ignition[980]: INFO : Ignition finished successfully Jan 17 00:21:18.348771 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:21:18.372916 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:21:18.373267 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:21:18.398374 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:21:18.441322 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:21:18.441626 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:21:18.456541 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:21:18.466463 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:21:18.466786 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:21:18.482801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:21:18.483119 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:21:18.546072 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:21:18.546260 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:21:18.570155 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:21:18.571132 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:21:18.571288 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:21:18.591190 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:21:18.591472 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:21:18.612936 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:21:18.613173 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:21:18.639455 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:21:18.639579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:21:18.649569 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:21:18.649645 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:21:18.686475 systemd[1]: Stopped target network.target - Network. Jan 17 00:21:18.705253 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:21:18.705519 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:21:18.714598 systemd[1]: Stopped target paths.target - Path Units. 
Jan 17 00:21:18.750236 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:21:18.750499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:21:18.771242 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:21:18.771448 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:21:18.797326 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:21:18.797491 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:21:18.816515 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:21:18.816864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:21:18.839346 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:21:18.839556 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:21:18.858357 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:21:18.858472 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:21:18.878324 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:21:18.878444 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:21:18.897629 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:21:18.903121 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 17 00:21:18.917487 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:21:18.936738 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:21:18.936895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:21:18.956233 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:21:18.956588 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:21:18.976345 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:21:18.976431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:21:19.003217 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:21:19.006506 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:21:19.006633 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:21:19.055598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:21:19.055693 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:21:19.064637 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:21:19.064722 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:21:19.082555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:21:19.082653 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:21:19.112542 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:21:19.544221 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 00:21:19.133840 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:21:19.134077 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:21:19.162630 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:21:19.162801 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 17 00:21:19.180533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:21:19.180592 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:21:19.197517 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:21:19.197600 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:21:19.237653 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:21:19.237746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:21:19.274566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:21:19.274699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:21:19.322297 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:21:19.344179 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:21:19.344340 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:21:19.344558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:21:19.344610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:19.375904 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:21:19.376100 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:21:19.396885 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:21:19.397057 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:21:19.421852 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:21:19.446316 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:21:19.495019 systemd[1]: Switching root. Jan 17 00:21:19.789255 systemd-journald[184]: Journal stopped Jan 17 00:21:22.635412 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:21:22.635496 kernel: SELinux: policy capability open_perms=1 Jan 17 00:21:22.635521 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:21:22.635541 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:21:22.635562 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:21:22.635581 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:21:22.635604 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:21:22.635630 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:21:22.635651 kernel: audit: type=1403 audit(1768609280.323:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:21:22.635675 systemd[1]: Successfully loaded SELinux policy in 85.965ms. Jan 17 00:21:22.635701 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.849ms. Jan 17 00:21:22.635726 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:21:22.635748 systemd[1]: Detected virtualization google. Jan 17 00:21:22.635770 systemd[1]: Detected architecture x86-64. Jan 17 00:21:22.635799 systemd[1]: Detected first boot. Jan 17 00:21:22.635823 systemd[1]: Initializing machine ID from random generator. 
Jan 17 00:21:22.635846 zram_generator::config[1021]: No configuration found. Jan 17 00:21:22.635869 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:21:22.635892 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:21:22.635920 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:21:22.635942 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:21:22.635992 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:21:22.636014 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:21:22.636033 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:21:22.636059 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:21:22.636102 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:21:22.636136 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:21:22.636161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:21:22.636184 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:21:22.636218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:21:22.636243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:21:22.636266 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:21:22.636289 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:21:22.636313 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:21:22.636340 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:21:22.636364 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:21:22.636387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:21:22.636409 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:21:22.636433 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:21:22.636456 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:21:22.636486 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:21:22.636511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:21:22.636535 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:21:22.636571 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:21:22.636596 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:21:22.636619 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:21:22.636643 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:21:22.636671 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:21:22.636694 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:21:22.636718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:21:22.636750 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 17 00:21:22.636776 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:21:22.636800 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:21:22.636824 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:21:22.636848 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:22.636877 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:21:22.636902 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:21:22.636924 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:21:22.636951 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:21:22.637011 systemd[1]: Reached target machines.target - Containers. Jan 17 00:21:22.637038 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:21:22.637062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:21:22.637085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:21:22.637116 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:21:22.637143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:21:22.637167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:21:22.637204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:21:22.637232 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:21:22.637259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:21:22.637285 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:21:22.637310 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:21:22.637342 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:21:22.637366 kernel: ACPI: bus type drm_connector registered Jan 17 00:21:22.637388 kernel: fuse: init (API version 7.39) Jan 17 00:21:22.637412 kernel: loop: module loaded Jan 17 00:21:22.637435 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:21:22.637460 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:21:22.637484 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:21:22.637509 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:21:22.637535 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:21:22.637624 systemd-journald[1108]: Collecting audit messages is disabled. Jan 17 00:21:22.637677 systemd-journald[1108]: Journal started Jan 17 00:21:22.637728 systemd-journald[1108]: Runtime Journal (/run/log/journal/324794e7285642558ad56479e5f4d84e) is 8.0M, max 148.7M, 140.7M free. Jan 17 00:21:21.368951 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:21:21.392181 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:21:21.393235 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 17 00:21:22.662022 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:21:22.696748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:21:22.696907 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:21:22.704021 systemd[1]: Stopped verity-setup.service. Jan 17 00:21:22.736211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:22.746016 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:21:22.757875 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:21:22.768603 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:21:22.779569 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:21:22.790552 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:21:22.801563 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:21:22.812542 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:21:22.823793 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:21:22.835885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:21:22.847748 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:21:22.848042 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:21:22.860706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:21:22.860991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:21:22.872718 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:21:22.873006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:21:22.883691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:21:22.883981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:21:22.896696 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:21:22.896996 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:21:22.907737 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:21:22.908004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:21:22.918723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:21:22.929772 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:21:22.941672 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:21:22.953722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:21:22.983304 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:21:23.001207 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:21:23.022139 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:21:23.033267 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:21:23.033568 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 17 00:21:23.047419 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:21:23.069423 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:21:23.089411 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:21:23.099564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:21:23.108899 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:21:23.126949 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:21:23.138270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:21:23.146376 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:21:23.156389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:21:23.176918 systemd-journald[1108]: Time spent on flushing to /var/log/journal/324794e7285642558ad56479e5f4d84e is 115.779ms for 931 entries. Jan 17 00:21:23.176918 systemd-journald[1108]: System Journal (/var/log/journal/324794e7285642558ad56479e5f4d84e) is 8.0M, max 584.8M, 576.8M free. Jan 17 00:21:23.360831 systemd-journald[1108]: Received client request to flush runtime journal. Jan 17 00:21:23.361371 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 00:21:23.170302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:21:23.199300 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:21:23.221341 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:21:23.242314 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:21:23.261704 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:21:23.274461 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:21:23.286825 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:21:23.303887 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:21:23.333463 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:21:23.356404 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:21:23.366627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:21:23.377514 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:21:23.421249 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:21:23.425748 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:21:23.481568 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 00:21:23.487366 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:21:23.495774 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:21:23.519926 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
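The systemd-journald lines above show the runtime journal being flushed into the persistent system journal under /var/log/journal (8.0M used, 584.8M max) once the root filesystem is writable. After boot the same journal can be inspected with journalctl; the unit name below is only an example:

# How much space the persistent journal is using
journalctl --disk-usage

# All messages from the current boot for a single unit
journalctl -b -u systemd-networkd.service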
Jan 17 00:21:23.546522 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:21:23.584043 kernel: loop2: detected capacity change from 0 to 229808 Jan 17 00:21:23.630027 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jan 17 00:21:23.630086 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jan 17 00:21:23.643646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:21:23.739057 kernel: loop3: detected capacity change from 0 to 54824 Jan 17 00:21:23.844022 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 00:21:23.918097 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 00:21:23.979020 kernel: loop6: detected capacity change from 0 to 229808 Jan 17 00:21:24.031463 kernel: loop7: detected capacity change from 0 to 54824 Jan 17 00:21:24.066346 (sd-merge)[1163]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 17 00:21:24.067315 (sd-merge)[1163]: Merged extensions into '/usr'. Jan 17 00:21:24.077164 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:21:24.077190 systemd[1]: Reloading... Jan 17 00:21:24.269412 zram_generator::config[1189]: No configuration found. Jan 17 00:21:24.598926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:24.616089 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:21:24.707629 systemd[1]: Reloading finished in 629 ms. Jan 17 00:21:24.738440 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:21:24.749080 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:21:24.779515 systemd[1]: Starting ensure-sysext.service... Jan 17 00:21:24.797311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:21:24.810039 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:21:24.829317 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:21:24.830524 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:21:24.832573 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:21:24.833193 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 17 00:21:24.833318 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 17 00:21:24.838370 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:21:24.841332 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:21:24.841355 systemd-tmpfiles[1231]: Skipping /boot Jan 17 00:21:24.851762 systemd[1]: Reloading requested from client PID 1230 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:21:24.851781 systemd[1]: Reloading... Jan 17 00:21:24.859254 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. 
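The (sd-merge) lines above are systemd-sysext composing the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-gce' extension images into an overlay on /usr, which is why systemd then performs the reload requested by systemd-sysext. On a running machine the merge can be inspected and redone with the same tool:

# Show which extension images are currently merged and where they come from
systemd-sysext status

# Re-merge after adding or removing *.raw images under /etc/extensions
# or /var/lib/extensions (no reboot needed)
sudo systemd-sysext refresh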
Jan 17 00:21:24.859287 systemd-tmpfiles[1231]: Skipping /boot Jan 17 00:21:24.938245 systemd-udevd[1234]: Using default interface naming scheme 'v255'. Jan 17 00:21:25.042058 zram_generator::config[1261]: No configuration found. Jan 17 00:21:25.366060 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:21:25.374881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:25.390019 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 00:21:25.428423 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:21:25.446851 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 17 00:21:25.454094 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 00:21:25.524457 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:21:25.524762 systemd[1]: Reloading finished in 672 ms. Jan 17 00:21:25.553142 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 00:21:25.553300 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1278) Jan 17 00:21:25.561303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:21:25.582895 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:21:25.596994 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:21:25.635104 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:25.645575 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:21:25.665631 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:21:25.677459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:21:25.685521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:21:25.702515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:21:25.719008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:21:25.730413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:21:25.740718 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:21:25.759208 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:21:25.782057 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:21:25.801536 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:21:25.813229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:25.824607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:21:25.826235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:21:25.839492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 17 00:21:25.840469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:21:25.851253 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:21:25.858168 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:21:25.858459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:21:25.910921 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:21:25.912508 augenrules[1359]: No rules Jan 17 00:21:25.923820 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:21:25.935147 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:21:25.971939 systemd[1]: Finished ensure-sysext.service. Jan 17 00:21:25.980797 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:21:25.992763 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:21:26.020315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 00:21:26.032540 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:26.032903 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:21:26.038358 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:21:26.061374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:21:26.083397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:21:26.087312 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:21:26.101417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:21:26.121360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:21:26.124298 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:21:26.140345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:21:26.142538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:21:26.154486 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:21:26.175337 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:21:26.178069 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:21:26.192473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:21:26.192589 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:21:26.192642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:26.195829 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:21:26.196625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:21:26.196889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 00:21:26.197546 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:21:26.198902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:21:26.199938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:21:26.200516 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:21:26.243874 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:21:26.244713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:21:26.262504 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:21:26.273322 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:21:26.288315 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:21:26.299266 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:21:26.340322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:21:26.348305 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:21:26.368357 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 17 00:21:26.368544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:21:26.368660 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:21:26.400827 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:21:26.442616 systemd-networkd[1346]: lo: Link UP Jan 17 00:21:26.443098 systemd-networkd[1346]: lo: Gained carrier Jan 17 00:21:26.451881 systemd-networkd[1346]: Enumeration completed Jan 17 00:21:26.452182 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:21:26.454731 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:21:26.454746 systemd-networkd[1346]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:21:26.455768 systemd-networkd[1346]: eth0: Link UP Jan 17 00:21:26.455776 systemd-networkd[1346]: eth0: Gained carrier Jan 17 00:21:26.455817 systemd-networkd[1346]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:21:26.461320 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:21:26.469306 systemd-networkd[1346]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a' Jan 17 00:21:26.469338 systemd-networkd[1346]: eth0: DHCPv4 address 10.128.0.88/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 00:21:26.471126 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:21:26.490122 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 17 00:21:26.500173 systemd-resolved[1348]: Positive Trust Anchors: Jan 17 00:21:26.500780 systemd-resolved[1348]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:21:26.500868 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:21:26.503676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:26.509502 systemd-resolved[1348]: Defaulting to hostname 'linux'. Jan 17 00:21:26.515487 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:21:26.526245 systemd[1]: Reached target network.target - Network. Jan 17 00:21:26.535227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:21:26.547265 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:21:26.557497 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:21:26.569408 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:21:26.581597 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:21:26.592711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:21:26.605302 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:21:26.617216 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:21:26.617291 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:21:26.626241 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:21:26.636130 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:21:26.648299 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:21:26.672284 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:21:26.684686 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:21:26.695519 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:21:26.705219 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:21:26.714278 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:21:26.714339 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:21:26.720204 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:21:26.744246 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:21:26.761650 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:21:26.782933 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:21:26.795502 jq[1421]: false Jan 17 00:21:26.804434 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
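systemd-networkd and systemd-resolved are both up at this point: eth0 took 10.128.0.88/32 with gateway 10.128.0.1 over DHCP, and resolved defaulted to the hostname 'linux' until systemd-hostnamed sets the real one later in the boot. A few read-only checks that would show the same state; metadata.google.internal is only an example name to resolve:
networkctl status eth0                     # link state and the DHCPv4 lease shown in the log
resolvectl status                          # per-link DNS servers plus the trust anchors listed above
resolvectl query metadata.google.internal  # example lookup through the local stub resolver
hostnamectl                                # transient vs. static hostname once systemd-hostnamed runs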
Jan 17 00:21:26.815213 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:21:26.822834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:21:26.841276 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:21:26.860747 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:21:26.879330 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:21:26.899536 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:21:26.926418 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:21:26.937692 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 17 00:21:26.939690 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:21:26.946335 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:21:26.959603 extend-filesystems[1422]: Found loop4 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found loop5 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found loop6 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found loop7 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found sda Jan 17 00:21:26.959603 extend-filesystems[1422]: Found sda1 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found sda2 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found sda3 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found usr Jan 17 00:21:26.959603 extend-filesystems[1422]: Found sda4 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found sda6 Jan 17 00:21:26.959603 extend-filesystems[1422]: Found sda7 Jan 17 00:21:27.123211 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: ---------------------------------------------------- Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: corporation. 
Support and training for ntp-4 are Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: available at https://www.nwtime.org/support Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: ---------------------------------------------------- Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: proto: precision = 0.089 usec (-23) Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: basedate set to 2026-01-04 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: gps base set to 2026-01-04 (week 2400) Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Listen normally on 3 eth0 10.128.0.88:123 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: bind(21) AF_INET6 fe80::4001:aff:fe80:58%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:58%2#123 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: failed to init interface for address fe80::4001:aff:fe80:58%2 Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: Listening on routing socket on fd #21 for interface updates Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:27.123384 ntpd[1426]: 17 Jan 00:21:26 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:26.960780 ntpd[1426]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:21:26.966239 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 17 00:21:27.147203 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1283) Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.973 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.976 INFO Fetch successful Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.976 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.978 INFO Fetch successful Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.979 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.980 INFO Fetch successful Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.980 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 17 00:21:27.147278 coreos-metadata[1419]: Jan 17 00:21:26.981 INFO Fetch successful Jan 17 00:21:27.147754 extend-filesystems[1422]: Found sda9 Jan 17 00:21:27.147754 extend-filesystems[1422]: Checking size of /dev/sda9 Jan 17 00:21:27.147754 extend-filesystems[1422]: Resized partition /dev/sda9 Jan 17 00:21:26.960817 ntpd[1426]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:21:27.021421 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:21:27.163935 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:21:27.167336 jq[1442]: true Jan 17 00:21:26.960834 ntpd[1426]: ---------------------------------------------------- Jan 17 00:21:27.054405 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:21:26.960848 ntpd[1426]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:21:27.055791 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:21:26.960863 ntpd[1426]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:21:27.056486 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:21:26.960878 ntpd[1426]: corporation. Support and training for ntp-4 are Jan 17 00:21:27.058083 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:21:26.960894 ntpd[1426]: available at https://www.nwtime.org/support Jan 17 00:21:27.087761 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:21:26.960912 ntpd[1426]: ---------------------------------------------------- Jan 17 00:21:27.089083 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:21:26.967865 ntpd[1426]: proto: precision = 0.089 usec (-23) Jan 17 00:21:27.161426 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:21:26.971286 ntpd[1426]: basedate set to 2026-01-04 Jan 17 00:21:27.161812 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:21:26.971317 ntpd[1426]: gps base set to 2026-01-04 (week 2400) Jan 17 00:21:27.161860 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
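coreos-metadata above is reading instance attributes from the GCE metadata server. The same endpoints can be fetched by hand; the Metadata-Flavor header is mandatory or the server rejects the request:
curl -s -H 'Metadata-Flavor: Google' http://169.254.169.254/computeMetadata/v1/instance/hostname
curl -s -H 'Metadata-Flavor: Google' http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip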
Jan 17 00:21:26.975561 ntpd[1426]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:27.162021 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:21:26.975634 ntpd[1426]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:27.162045 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:21:26.976207 ntpd[1426]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:26.976315 ntpd[1426]: Listen normally on 3 eth0 10.128.0.88:123 Jan 17 00:21:26.976395 ntpd[1426]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:26.976476 ntpd[1426]: bind(21) AF_INET6 fe80::4001:aff:fe80:58%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:21:26.976512 ntpd[1426]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:58%2#123 Jan 17 00:21:26.976535 ntpd[1426]: failed to init interface for address fe80::4001:aff:fe80:58%2 Jan 17 00:21:26.976591 ntpd[1426]: Listening on routing socket on fd #21 for interface updates Jan 17 00:21:26.979368 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:26.979422 ntpd[1426]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:27.001383 dbus-daemon[1420]: [system] SELinux support is enabled Jan 17 00:21:27.010024 dbus-daemon[1420]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1346 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:27.173340 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:21:27.186098 update_engine[1440]: I20260117 00:21:27.185262 1440 main.cc:92] Flatcar Update Engine starting Jan 17 00:21:27.209015 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Jan 17 00:21:27.207195 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:21:27.232416 update_engine[1440]: I20260117 00:21:27.225386 1440 update_check_scheduler.cc:74] Next update check in 8m33s Jan 17 00:21:27.238051 extend-filesystems[1450]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:21:27.238051 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:21:27.238051 extend-filesystems[1450]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Jan 17 00:21:27.238767 extend-filesystems[1422]: Resized filesystem in /dev/sda9 Jan 17 00:21:27.240600 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:21:27.243129 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:21:27.253606 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:21:27.323195 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:21:27.325514 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:21:27.340007 jq[1462]: true Jan 17 00:21:27.398039 tar[1454]: linux-amd64/LICENSE Jan 17 00:21:27.399418 tar[1454]: linux-amd64/helm Jan 17 00:21:27.403061 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:21:27.413777 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
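extend-filesystems grew the root filesystem online: /dev/sda9 went from 1617920 to 3587067 4k blocks while mounted on /. Done by hand, the same operation is roughly:
# Online ext4 grow; with no size argument resize2fs fills the (already enlarged) partition.
sudo resize2fs /dev/sda9
df -h /          # confirm the new size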
Jan 17 00:21:27.533570 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:21:27.533615 systemd-logind[1438]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 17 00:21:27.533649 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:21:27.534424 systemd-logind[1438]: New seat seat0. Jan 17 00:21:27.551730 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:21:27.634157 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:21:27.634413 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:21:27.638062 dbus-daemon[1420]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1465 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:27.639689 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:27.646059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:21:27.679242 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:21:27.690201 systemd-networkd[1346]: eth0: Gained IPv6LL Jan 17 00:21:27.698519 systemd[1]: Starting sshkeys.service... Jan 17 00:21:27.726323 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:21:27.739117 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:21:27.760653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:27.782461 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:21:27.804773 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 17 00:21:27.862176 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:21:27.883349 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:21:27.886228 polkitd[1495]: Started polkitd version 121 Jan 17 00:21:27.898584 init.sh[1505]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 17 00:21:27.898584 init.sh[1505]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 17 00:21:27.898584 init.sh[1505]: + /usr/bin/google_instance_setup Jan 17 00:21:27.935584 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:21:27.955106 polkitd[1495]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:21:27.955231 polkitd[1495]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:21:27.965731 polkitd[1495]: Finished loading, compiling and executing 2 rules Jan 17 00:21:27.967336 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:21:27.973636 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:21:27.977729 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:21:27.980817 polkitd[1495]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:21:28.103276 systemd-hostnamed[1465]: Hostname set to (transient) Jan 17 00:21:28.106541 systemd-resolved[1348]: System hostname changed to 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a'. 
Jan 17 00:21:28.155146 coreos-metadata[1509]: Jan 17 00:21:28.154 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 17 00:21:28.156936 coreos-metadata[1509]: Jan 17 00:21:28.156 INFO Fetch failed with 404: resource not found Jan 17 00:21:28.156936 coreos-metadata[1509]: Jan 17 00:21:28.156 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 17 00:21:28.161261 coreos-metadata[1509]: Jan 17 00:21:28.157 INFO Fetch successful Jan 17 00:21:28.161261 coreos-metadata[1509]: Jan 17 00:21:28.160 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 17 00:21:28.161261 coreos-metadata[1509]: Jan 17 00:21:28.161 INFO Fetch failed with 404: resource not found Jan 17 00:21:28.161261 coreos-metadata[1509]: Jan 17 00:21:28.161 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 17 00:21:28.165542 coreos-metadata[1509]: Jan 17 00:21:28.161 INFO Fetch failed with 404: resource not found Jan 17 00:21:28.165542 coreos-metadata[1509]: Jan 17 00:21:28.161 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 17 00:21:28.165542 coreos-metadata[1509]: Jan 17 00:21:28.164 INFO Fetch successful Jan 17 00:21:28.169096 unknown[1509]: wrote ssh authorized keys file for user: core Jan 17 00:21:28.243459 update-ssh-keys[1527]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:28.248714 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:21:28.269735 systemd[1]: Finished sshkeys.service. Jan 17 00:21:28.339003 containerd[1466]: time="2026-01-17T00:21:28.336753736Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:21:28.453447 containerd[1466]: time="2026-01-17T00:21:28.452733373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.460583052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.460667557Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.460701988Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461012853Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461040871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461132329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461152808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461452010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461485240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461512430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462020 containerd[1466]: time="2026-01-17T00:21:28.461530363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462596 containerd[1466]: time="2026-01-17T00:21:28.461694123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462596 containerd[1466]: time="2026-01-17T00:21:28.462112724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462596 containerd[1466]: time="2026-01-17T00:21:28.462335852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:28.462596 containerd[1466]: time="2026-01-17T00:21:28.462364615Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:21:28.462596 containerd[1466]: time="2026-01-17T00:21:28.462520134Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:21:28.462822 containerd[1466]: time="2026-01-17T00:21:28.462615713Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:21:28.478315 containerd[1466]: time="2026-01-17T00:21:28.476095098Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:21:28.478315 containerd[1466]: time="2026-01-17T00:21:28.476228483Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:21:28.478315 containerd[1466]: time="2026-01-17T00:21:28.476259542Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:21:28.478315 containerd[1466]: time="2026-01-17T00:21:28.476288665Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:21:28.478315 containerd[1466]: time="2026-01-17T00:21:28.476318166Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:21:28.478315 containerd[1466]: time="2026-01-17T00:21:28.476596639Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:21:28.478944 containerd[1466]: time="2026-01-17T00:21:28.478283684Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481227321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481308404Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481354665Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481384801Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481431215Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481456978Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481505647Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481533013Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481556271Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481598842Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481620952Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481674830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481699092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.484524 containerd[1466]: time="2026-01-17T00:21:28.481769236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.481797357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.481850861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.481875345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.481897862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.481936928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.481959145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484161804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484215683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484244516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484288929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484319678Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484386055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484409377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.485327 containerd[1466]: time="2026-01-17T00:21:28.484448994Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486102446Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486293056Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486318845Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486340993Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486376856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486402408Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486422568Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:21:28.486863 containerd[1466]: time="2026-01-17T00:21:28.486458912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:21:28.491666 containerd[1466]: time="2026-01-17T00:21:28.488865661Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:21:28.491666 containerd[1466]: time="2026-01-17T00:21:28.489037083Z" level=info msg="Connect containerd service" Jan 17 00:21:28.491666 containerd[1466]: time="2026-01-17T00:21:28.489180308Z" level=info msg="using legacy CRI server" Jan 17 00:21:28.491666 containerd[1466]: time="2026-01-17T00:21:28.489195380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:21:28.491666 containerd[1466]: time="2026-01-17T00:21:28.491029289Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:21:28.497267 containerd[1466]: time="2026-01-17T00:21:28.497013946Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:21:28.497882 
containerd[1466]: time="2026-01-17T00:21:28.497460410Z" level=info msg="Start subscribing containerd event" Jan 17 00:21:28.497882 containerd[1466]: time="2026-01-17T00:21:28.497557174Z" level=info msg="Start recovering state" Jan 17 00:21:28.497882 containerd[1466]: time="2026-01-17T00:21:28.497669185Z" level=info msg="Start event monitor" Jan 17 00:21:28.497882 containerd[1466]: time="2026-01-17T00:21:28.497687150Z" level=info msg="Start snapshots syncer" Jan 17 00:21:28.497882 containerd[1466]: time="2026-01-17T00:21:28.497703272Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:21:28.497882 containerd[1466]: time="2026-01-17T00:21:28.497715757Z" level=info msg="Start streaming server" Jan 17 00:21:28.502662 containerd[1466]: time="2026-01-17T00:21:28.501944346Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:21:28.502662 containerd[1466]: time="2026-01-17T00:21:28.502087880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:21:28.502662 containerd[1466]: time="2026-01-17T00:21:28.502574222Z" level=info msg="containerd successfully booted in 0.172667s" Jan 17 00:21:28.502378 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:21:28.745316 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:21:28.838636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:21:28.863760 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:21:28.879726 systemd[1]: Started sshd@0-10.128.0.88:22-4.153.228.146:36378.service - OpenSSH per-connection server daemon (4.153.228.146:36378). Jan 17 00:21:28.927785 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:21:28.928265 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:21:28.949204 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:21:28.996712 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:21:29.021701 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:21:29.041693 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:21:29.053555 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:21:29.297221 sshd[1543]: Accepted publickey for core from 4.153.228.146 port 36378 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:29.301135 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:29.334660 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:21:29.355915 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:21:29.363606 tar[1454]: linux-amd64/README.md Jan 17 00:21:29.393901 systemd-logind[1438]: New session 1 of user core. Jan 17 00:21:29.397744 instance-setup[1512]: INFO Running google_set_multiqueue. Jan 17 00:21:29.401564 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:21:29.422205 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:21:29.435008 instance-setup[1512]: INFO Set channels for eth0 to 2. Jan 17 00:21:29.444750 instance-setup[1512]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 17 00:21:29.448784 systemd[1]: Starting user@500.service - User Manager for UID 500... 
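The CRI configuration dumped above drives runc through io.containerd.runc.v2 with SystemdCgroup:true and uses registry.k8s.io/pause:3.8 as the sandbox image. Two quick checks against the running daemon, assuming the default socket path shown in the log:
containerd config dump | grep -n SystemdCgroup          # expect: SystemdCgroup = true for the runc runtime
ctr --address /run/containerd/containerd.sock version   # confirms the socket logged above is serving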
Jan 17 00:21:29.450611 instance-setup[1512]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 17 00:21:29.451124 instance-setup[1512]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 17 00:21:29.455230 instance-setup[1512]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 17 00:21:29.455654 instance-setup[1512]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 17 00:21:29.460248 instance-setup[1512]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 17 00:21:29.461842 instance-setup[1512]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 17 00:21:29.464757 instance-setup[1512]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 17 00:21:29.479404 instance-setup[1512]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:21:29.487391 instance-setup[1512]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 00:21:29.490861 instance-setup[1512]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 17 00:21:29.490927 instance-setup[1512]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 17 00:21:29.493148 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:21:29.534658 init.sh[1505]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 17 00:21:29.739890 systemd[1574]: Queued start job for default target default.target. Jan 17 00:21:29.746827 systemd[1574]: Created slice app.slice - User Application Slice. Jan 17 00:21:29.748940 systemd[1574]: Reached target paths.target - Paths. Jan 17 00:21:29.749005 systemd[1574]: Reached target timers.target - Timers. Jan 17 00:21:29.753198 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:21:29.797513 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:21:29.798113 systemd[1574]: Reached target sockets.target - Sockets. Jan 17 00:21:29.798167 systemd[1574]: Reached target basic.target - Basic System. Jan 17 00:21:29.798339 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:21:29.798560 systemd[1574]: Reached target default.target - Main User Target. Jan 17 00:21:29.798652 systemd[1574]: Startup finished in 288ms. Jan 17 00:21:29.817478 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:21:29.838002 startup-script[1591]: INFO Starting startup scripts. Jan 17 00:21:29.848021 startup-script[1591]: INFO No startup scripts found in metadata. Jan 17 00:21:29.848122 startup-script[1591]: INFO Finished running startup scripts. Jan 17 00:21:29.885216 init.sh[1505]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 17 00:21:29.886613 init.sh[1505]: + daemon_pids=() Jan 17 00:21:29.886613 init.sh[1505]: + for d in accounts clock_skew network Jan 17 00:21:29.886613 init.sh[1505]: + daemon_pids+=($!) Jan 17 00:21:29.886613 init.sh[1505]: + for d in accounts clock_skew network Jan 17 00:21:29.886613 init.sh[1505]: + daemon_pids+=($!) Jan 17 00:21:29.886613 init.sh[1505]: + for d in accounts clock_skew network Jan 17 00:21:29.887033 init.sh[1600]: + /usr/bin/google_clock_skew_daemon Jan 17 00:21:29.887741 init.sh[1599]: + /usr/bin/google_accounts_daemon Jan 17 00:21:29.890069 init.sh[1601]: + /usr/bin/google_network_daemon Jan 17 00:21:29.891382 init.sh[1505]: + daemon_pids+=($!) 
Jan 17 00:21:29.891382 init.sh[1505]: + NOTIFY_SOCKET=/run/systemd/notify Jan 17 00:21:29.891382 init.sh[1505]: + /usr/bin/systemd-notify --ready Jan 17 00:21:29.921351 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 17 00:21:29.936620 init.sh[1505]: + wait -n 1599 1600 1601 Jan 17 00:21:29.964424 ntpd[1426]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:58%2]:123 Jan 17 00:21:29.966137 ntpd[1426]: 17 Jan 00:21:29 ntpd[1426]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:58%2]:123 Jan 17 00:21:30.047501 systemd[1]: Started sshd@1-10.128.0.88:22-4.153.228.146:36392.service - OpenSSH per-connection server daemon (4.153.228.146:36392). Jan 17 00:21:30.410368 sshd[1605]: Accepted publickey for core from 4.153.228.146 port 36392 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:30.412465 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:30.431859 systemd-logind[1438]: New session 2 of user core. Jan 17 00:21:30.436315 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:21:30.509107 google-clock-skew[1600]: INFO Starting Google Clock Skew daemon. Jan 17 00:21:30.528544 google-clock-skew[1600]: INFO Clock drift token has changed: 0. Jan 17 00:21:30.543110 google-networking[1601]: INFO Starting Google Networking daemon. Jan 17 00:21:30.616826 groupadd[1617]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 17 00:21:30.620775 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:30.626201 groupadd[1617]: group added to /etc/gshadow: name=google-sudoers Jan 17 00:21:30.630880 systemd[1]: sshd@1-10.128.0.88:22-4.153.228.146:36392.service: Deactivated successfully. Jan 17 00:21:30.638183 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:21:30.640569 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:21:30.643441 systemd-logind[1438]: Removed session 2. Jan 17 00:21:30.665570 systemd[1]: Started sshd@2-10.128.0.88:22-4.153.228.146:36394.service - OpenSSH per-connection server daemon (4.153.228.146:36394). Jan 17 00:21:30.716570 groupadd[1617]: new group: name=google-sudoers, GID=1000 Jan 17 00:21:30.745295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:30.758999 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:21:30.764762 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:30.770825 systemd[1]: Startup finished in 1.155s (kernel) + 10.535s (initrd) + 10.530s (userspace) = 22.221s. Jan 17 00:21:30.793626 google-accounts[1599]: INFO Starting Google Accounts daemon. Jan 17 00:21:30.819756 google-accounts[1599]: WARNING OS Login not installed. Jan 17 00:21:30.822102 google-accounts[1599]: INFO Creating a new user account for 0. Jan 17 00:21:30.828086 init.sh[1640]: useradd: invalid user name '0': use --badname to ignore Jan 17 00:21:30.827838 google-accounts[1599]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 17 00:21:30.920449 sshd[1624]: Accepted publickey for core from 4.153.228.146 port 36394 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:30.923472 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:30.932078 systemd-logind[1438]: New session 3 of user core. 
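google-accounts falls back to local account management because OS Login is not installed, then fails to create a user literally named '0', which useradd rejects. Enabling OS Login is normally done through instance or project metadata; the instance name and zone below are placeholders:
# Hypothetical instance/zone; enable-oslogin is the documented metadata key.
gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata enable-oslogin=TRUE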
Jan 17 00:21:30.939327 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:21:31.001000 systemd-resolved[1348]: Clock change detected. Flushing caches. Jan 17 00:21:31.001371 google-clock-skew[1600]: INFO Synced system time with hardware clock. Jan 17 00:21:31.135989 sshd[1624]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:31.142493 systemd[1]: sshd@2-10.128.0.88:22-4.153.228.146:36394.service: Deactivated successfully. Jan 17 00:21:31.145791 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:21:31.148611 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:21:31.151621 systemd-logind[1438]: Removed session 3. Jan 17 00:21:31.868129 kubelet[1634]: E0117 00:21:31.868040 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:31.873210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:31.873516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:31.874050 systemd[1]: kubelet.service: Consumed 1.442s CPU time. Jan 17 00:21:41.182362 systemd[1]: Started sshd@3-10.128.0.88:22-4.153.228.146:54502.service - OpenSSH per-connection server daemon (4.153.228.146:54502). Jan 17 00:21:41.412203 sshd[1653]: Accepted publickey for core from 4.153.228.146 port 54502 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:41.414546 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:41.421785 systemd-logind[1438]: New session 4 of user core. Jan 17 00:21:41.429262 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:21:41.582599 sshd[1653]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:41.588258 systemd[1]: sshd@3-10.128.0.88:22-4.153.228.146:54502.service: Deactivated successfully. Jan 17 00:21:41.591368 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:21:41.593709 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:21:41.596030 systemd-logind[1438]: Removed session 4. Jan 17 00:21:41.633432 systemd[1]: Started sshd@4-10.128.0.88:22-4.153.228.146:54508.service - OpenSSH per-connection server daemon (4.153.228.146:54508). Jan 17 00:21:41.853944 sshd[1660]: Accepted publickey for core from 4.153.228.146 port 54508 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:41.857152 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:41.865480 systemd-logind[1438]: New session 5 of user core. Jan 17 00:21:41.872233 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:21:41.874382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:21:41.882716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:42.025902 sshd[1660]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:42.031497 systemd[1]: sshd@4-10.128.0.88:22-4.153.228.146:54508.service: Deactivated successfully. Jan 17 00:21:42.036959 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:21:42.040960 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. 
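The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during 'kubeadm init' or 'kubeadm join', so failures before bootstrap are expected and systemd keeps retrying the unit. A minimal sketch of the kind of file the kubelet is waiting for; the values are illustrative, not what this node will eventually receive:
# Illustrative only; kubeadm normally writes this file during node bootstrap.
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches SystemdCgroup=true in the containerd config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF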
Jan 17 00:21:42.043371 systemd-logind[1438]: Removed session 5. Jan 17 00:21:42.080494 systemd[1]: Started sshd@5-10.128.0.88:22-4.153.228.146:54512.service - OpenSSH per-connection server daemon (4.153.228.146:54512). Jan 17 00:21:42.258047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:42.269625 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:42.320642 sshd[1670]: Accepted publickey for core from 4.153.228.146 port 54512 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:42.324065 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:42.336933 systemd-logind[1438]: New session 6 of user core. Jan 17 00:21:42.341156 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:21:42.343744 kubelet[1677]: E0117 00:21:42.342819 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:42.350061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:42.350280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:42.497977 sshd[1670]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:42.504515 systemd[1]: sshd@5-10.128.0.88:22-4.153.228.146:54512.service: Deactivated successfully. Jan 17 00:21:42.507178 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:21:42.508264 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:21:42.509948 systemd-logind[1438]: Removed session 6. Jan 17 00:21:42.544388 systemd[1]: Started sshd@6-10.128.0.88:22-4.153.228.146:54516.service - OpenSSH per-connection server daemon (4.153.228.146:54516). Jan 17 00:21:42.774176 sshd[1689]: Accepted publickey for core from 4.153.228.146 port 54516 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:42.776155 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:42.784601 systemd-logind[1438]: New session 7 of user core. Jan 17 00:21:42.794438 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:21:42.944576 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:21:42.945204 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:42.964714 sudo[1692]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:42.997984 sshd[1689]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:43.003426 systemd[1]: sshd@6-10.128.0.88:22-4.153.228.146:54516.service: Deactivated successfully. Jan 17 00:21:43.006440 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:21:43.008919 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:21:43.010761 systemd-logind[1438]: Removed session 7. Jan 17 00:21:43.049698 systemd[1]: Started sshd@7-10.128.0.88:22-4.153.228.146:54522.service - OpenSSH per-connection server daemon (4.153.228.146:54522). 
Jan 17 00:21:43.303080 sshd[1697]: Accepted publickey for core from 4.153.228.146 port 54522 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:43.305728 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:43.314135 systemd-logind[1438]: New session 8 of user core. Jan 17 00:21:43.324266 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:21:43.465068 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:21:43.465653 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:43.472014 sudo[1701]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:43.488904 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:21:43.489506 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:43.510474 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:21:43.525161 auditctl[1704]: No rules Jan 17 00:21:43.526981 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:21:43.527359 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:21:43.536157 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:21:43.586630 augenrules[1723]: No rules Jan 17 00:21:43.587046 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:21:43.588795 sudo[1700]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:43.628461 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:43.634676 systemd[1]: sshd@7-10.128.0.88:22-4.153.228.146:54522.service: Deactivated successfully. Jan 17 00:21:43.638152 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:21:43.640812 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:21:43.644097 systemd-logind[1438]: Removed session 8. Jan 17 00:21:43.675949 systemd[1]: Started sshd@8-10.128.0.88:22-4.153.228.146:54534.service - OpenSSH per-connection server daemon (4.153.228.146:54534). Jan 17 00:21:43.909542 sshd[1731]: Accepted publickey for core from 4.153.228.146 port 54534 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:21:43.912768 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:43.920953 systemd-logind[1438]: New session 9 of user core. Jan 17 00:21:43.932404 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:21:44.059369 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:21:44.060147 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:44.580450 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:21:44.592693 (dockerd)[1750]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:21:45.109434 dockerd[1750]: time="2026-01-17T00:21:45.109303063Z" level=info msg="Starting up" Jan 17 00:21:45.281684 dockerd[1750]: time="2026-01-17T00:21:45.281264312Z" level=info msg="Loading containers: start." 
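dockerd is starting here; once it is up, the storage driver and daemon version it reports a few lines below (overlay2, 26.1.0) can be read back directly:
docker info --format '{{.Driver}}'              # storage driver; overlay2 per the daemon messages below
docker version --format '{{.Server.Version}}'   # 26.1.0 per the daemon log below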
Jan 17 00:21:45.492901 kernel: Initializing XFRM netlink socket Jan 17 00:21:45.636372 systemd-networkd[1346]: docker0: Link UP Jan 17 00:21:45.663752 dockerd[1750]: time="2026-01-17T00:21:45.663658429Z" level=info msg="Loading containers: done." Jan 17 00:21:45.689600 dockerd[1750]: time="2026-01-17T00:21:45.689510737Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:21:45.689952 dockerd[1750]: time="2026-01-17T00:21:45.689715694Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:21:45.690028 dockerd[1750]: time="2026-01-17T00:21:45.689954551Z" level=info msg="Daemon has completed initialization" Jan 17 00:21:45.743070 dockerd[1750]: time="2026-01-17T00:21:45.742865771Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:21:45.743233 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:21:46.892156 containerd[1466]: time="2026-01-17T00:21:46.892094880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:21:47.412619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575608052.mount: Deactivated successfully. Jan 17 00:21:49.220895 containerd[1466]: time="2026-01-17T00:21:49.220793506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:49.222875 containerd[1466]: time="2026-01-17T00:21:49.222623115Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30122799" Jan 17 00:21:49.224944 containerd[1466]: time="2026-01-17T00:21:49.224851746Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:49.229482 containerd[1466]: time="2026-01-17T00:21:49.229383283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:49.231152 containerd[1466]: time="2026-01-17T00:21:49.231082964Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.338918404s" Jan 17 00:21:49.232177 containerd[1466]: time="2026-01-17T00:21:49.231405792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 17 00:21:49.232634 containerd[1466]: time="2026-01-17T00:21:49.232498770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:21:51.086863 containerd[1466]: time="2026-01-17T00:21:51.086765912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:51.088767 containerd[1466]: time="2026-01-17T00:21:51.088604246Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26018839" Jan 17 00:21:51.091872 containerd[1466]: time="2026-01-17T00:21:51.090413197Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:51.099717 containerd[1466]: time="2026-01-17T00:21:51.099650002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:51.101329 containerd[1466]: time="2026-01-17T00:21:51.101264145Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.868707126s" Jan 17 00:21:51.101584 containerd[1466]: time="2026-01-17T00:21:51.101552356Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 17 00:21:51.102369 containerd[1466]: time="2026-01-17T00:21:51.102311092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 17 00:21:52.424697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:21:52.435058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:52.731742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
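The dockerd warning at 00:21:45 ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") only affects image-build performance, not the containers started later in this log. If you wanted to confirm that kernel option on a host like this one, a hedged sketch follows; it assumes the kernel exposes its build config at /proc/config.gz, which depends on the kernel build and may not hold here.

```python
# Sketch: check whether the running kernel was built with
# CONFIG_OVERLAY_FS_REDIRECT_DIR, the option dockerd cites in its warning above.
# Assumes the kernel exposes its config at /proc/config.gz; not all builds do.
import gzip
from pathlib import Path

cfg = Path("/proc/config.gz")
if cfg.exists():
    text = gzip.decompress(cfg.read_bytes()).decode()
    matches = [line for line in text.splitlines() if "OVERLAY_FS_REDIRECT_DIR" in line]
    print("\n".join(matches) or "option not present in kernel config")
else:
    print("kernel config is not exposed at /proc/config.gz on this host")
```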
Jan 17 00:21:52.745533 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:52.798493 containerd[1466]: time="2026-01-17T00:21:52.797915538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:52.802292 containerd[1466]: time="2026-01-17T00:21:52.801088930Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20160142" Jan 17 00:21:52.803071 containerd[1466]: time="2026-01-17T00:21:52.803015407Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:52.810699 containerd[1466]: time="2026-01-17T00:21:52.810627905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:52.812612 containerd[1466]: time="2026-01-17T00:21:52.812543369Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.709898932s" Jan 17 00:21:52.813171 containerd[1466]: time="2026-01-17T00:21:52.813132878Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 17 00:21:52.815179 containerd[1466]: time="2026-01-17T00:21:52.815131371Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:21:52.821856 kubelet[1962]: E0117 00:21:52.821766 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:52.826589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:52.826910 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:54.013042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450163130.mount: Deactivated successfully. 
Jan 17 00:21:54.863005 containerd[1466]: time="2026-01-17T00:21:54.862904143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:54.864887 containerd[1466]: time="2026-01-17T00:21:54.864559744Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31932119" Jan 17 00:21:54.866384 containerd[1466]: time="2026-01-17T00:21:54.866290656Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:54.870011 containerd[1466]: time="2026-01-17T00:21:54.869935126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:54.871383 containerd[1466]: time="2026-01-17T00:21:54.871161540Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.055696556s" Jan 17 00:21:54.871383 containerd[1466]: time="2026-01-17T00:21:54.871229803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:21:54.872200 containerd[1466]: time="2026-01-17T00:21:54.872151603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:21:55.279267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79702162.mount: Deactivated successfully. 
Jan 17 00:21:56.704154 containerd[1466]: time="2026-01-17T00:21:56.704045389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:56.706959 containerd[1466]: time="2026-01-17T00:21:56.705965457Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20949320" Jan 17 00:21:56.712998 containerd[1466]: time="2026-01-17T00:21:56.712926738Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:56.718380 containerd[1466]: time="2026-01-17T00:21:56.718299010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:56.720651 containerd[1466]: time="2026-01-17T00:21:56.720252765Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.848046757s" Jan 17 00:21:56.720651 containerd[1466]: time="2026-01-17T00:21:56.720326965Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 17 00:21:56.722161 containerd[1466]: time="2026-01-17T00:21:56.722096424Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:21:57.131144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076522683.mount: Deactivated successfully. 
Jan 17 00:21:57.141130 containerd[1466]: time="2026-01-17T00:21:57.141003933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:57.143030 containerd[1466]: time="2026-01-17T00:21:57.142905079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322136" Jan 17 00:21:57.145080 containerd[1466]: time="2026-01-17T00:21:57.144452895Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:57.148728 containerd[1466]: time="2026-01-17T00:21:57.148631160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:57.151359 containerd[1466]: time="2026-01-17T00:21:57.151276598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 429.114357ms" Jan 17 00:21:57.151359 containerd[1466]: time="2026-01-17T00:21:57.151352507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:21:57.152399 containerd[1466]: time="2026-01-17T00:21:57.152045425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:21:57.772371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340657466.mount: Deactivated successfully. Jan 17 00:21:58.182307 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 17 00:22:00.282467 containerd[1466]: time="2026-01-17T00:22:00.282380197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:00.284420 containerd[1466]: time="2026-01-17T00:22:00.284325221Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58933254" Jan 17 00:22:00.285502 containerd[1466]: time="2026-01-17T00:22:00.285442905Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:00.291805 containerd[1466]: time="2026-01-17T00:22:00.291736338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:00.295867 containerd[1466]: time="2026-01-17T00:22:00.293741044Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.141570136s" Jan 17 00:22:00.295867 containerd[1466]: time="2026-01-17T00:22:00.293845598Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 17 00:22:02.924292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:22:02.932338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:03.423236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:03.435453 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:22:03.531661 kubelet[2119]: E0117 00:22:03.531563 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:22:03.536908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:22:03.537190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:22:04.986373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:04.994393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:05.065607 systemd[1]: Reloading requested from client PID 2134 ('systemctl') (unit session-9.scope)... Jan 17 00:22:05.065640 systemd[1]: Reloading... Jan 17 00:22:05.277866 zram_generator::config[2177]: No configuration found. Jan 17 00:22:05.446355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:22:05.559408 systemd[1]: Reloading finished in 492 ms. Jan 17 00:22:05.650749 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:22:05.650961 systemd[1]: kubelet.service: Failed with result 'signal'. 
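Between 00:21:46 and 00:22:00 containerd reports each control-plane image pull with a size and a wall-clock duration. Dividing the two gives a rough effective throughput of about 11-18 MiB/s per image on this instance. The sketch below just redoes that arithmetic with the size/duration pairs transcribed from the "Pulled image" entries above (sizes are the "size" values containerd prints, in bytes, so the result is only an approximation of network throughput).

```python
# Rough effective pull throughput, computed from the size/duration pairs that
# containerd logs above (size in bytes, duration in seconds).
pulls = {
    "kube-apiserver:v1.33.7":          (30111311, 2.338918404),
    "kube-controller-manager:v1.33.7": (27673815, 1.868707126),
    "kube-scheduler:v1.33.7":          (21815154, 1.709898932),
    "kube-proxy:v1.33.7":              (31929115, 2.055696556),
    "coredns:v1.12.0":                 (20939036, 1.848046757),
    "etcd:3.5.21-0":                   (58938593, 3.141570136),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image:35s} {mib_per_s:5.1f} MiB/s")
```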
Jan 17 00:22:05.651544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:05.658409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:05.974348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:05.988666 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:22:06.055857 kubelet[2226]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:22:06.055857 kubelet[2226]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:22:06.055857 kubelet[2226]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:22:06.055857 kubelet[2226]: I0117 00:22:06.054575 2226 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:22:07.117108 kubelet[2226]: I0117 00:22:07.116956 2226 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:22:07.117108 kubelet[2226]: I0117 00:22:07.117006 2226 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:22:07.117820 kubelet[2226]: I0117 00:22:07.117435 2226 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:22:07.166407 kubelet[2226]: E0117 00:22:07.166327 2226 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:22:07.169132 kubelet[2226]: I0117 00:22:07.169065 2226 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:22:07.178282 kubelet[2226]: E0117 00:22:07.178226 2226 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:22:07.178282 kubelet[2226]: I0117 00:22:07.178279 2226 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:22:07.183428 kubelet[2226]: I0117 00:22:07.183365 2226 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:22:07.183922 kubelet[2226]: I0117 00:22:07.183864 2226 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:22:07.184195 kubelet[2226]: I0117 00:22:07.183912 2226 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:22:07.184402 kubelet[2226]: I0117 00:22:07.184197 2226 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:22:07.184402 kubelet[2226]: I0117 00:22:07.184217 2226 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:22:07.186048 kubelet[2226]: I0117 00:22:07.185989 2226 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:07.190856 kubelet[2226]: I0117 00:22:07.190773 2226 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:22:07.191923 kubelet[2226]: I0117 00:22:07.190873 2226 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:22:07.191923 kubelet[2226]: I0117 00:22:07.190926 2226 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:22:07.191923 kubelet[2226]: I0117 00:22:07.190952 2226 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:22:07.209495 kubelet[2226]: E0117 00:22:07.209396 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:22:07.209695 kubelet[2226]: E0117 00:22:07.209562 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a&limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:22:07.211456 kubelet[2226]: I0117 00:22:07.211398 2226 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:22:07.215060 kubelet[2226]: I0117 00:22:07.215004 2226 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:22:07.217875 kubelet[2226]: W0117 00:22:07.216888 2226 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:22:07.237149 kubelet[2226]: I0117 00:22:07.237073 2226 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:22:07.237921 kubelet[2226]: I0117 00:22:07.237215 2226 server.go:1289] "Started kubelet" Jan 17 00:22:07.241250 kubelet[2226]: I0117 00:22:07.241151 2226 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:22:07.242911 kubelet[2226]: I0117 00:22:07.242872 2226 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:22:07.248509 kubelet[2226]: I0117 00:22:07.248426 2226 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:22:07.249867 kubelet[2226]: I0117 00:22:07.249054 2226 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:22:07.249867 kubelet[2226]: I0117 00:22:07.249394 2226 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:22:07.253862 kubelet[2226]: E0117 00:22:07.251193 2226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a.188b5cd0086997b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,UID:ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,},FirstTimestamp:2026-01-17 00:22:07.237142457 +0000 UTC m=+1.241135518,LastTimestamp:2026-01-17 00:22:07.237142457 +0000 UTC m=+1.241135518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,}" Jan 17 00:22:07.254173 kubelet[2226]: I0117 00:22:07.254140 2226 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:22:07.257796 kubelet[2226]: E0117 00:22:07.257747 2226 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" Jan 17 00:22:07.257796 kubelet[2226]: I0117 00:22:07.257810 2226 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:22:07.259700 kubelet[2226]: I0117 00:22:07.258172 2226 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:22:07.259700 kubelet[2226]: I0117 00:22:07.258279 2226 reconciler.go:26] "Reconciler: start 
to sync state" Jan 17 00:22:07.259700 kubelet[2226]: E0117 00:22:07.258998 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:22:07.259700 kubelet[2226]: I0117 00:22:07.259498 2226 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:22:07.259700 kubelet[2226]: I0117 00:22:07.259593 2226 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:22:07.260498 kubelet[2226]: E0117 00:22:07.260438 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a?timeout=10s\": dial tcp 10.128.0.88:6443: connect: connection refused" interval="200ms" Jan 17 00:22:07.261837 kubelet[2226]: E0117 00:22:07.261783 2226 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:22:07.262943 kubelet[2226]: I0117 00:22:07.262912 2226 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:22:07.289384 kubelet[2226]: I0117 00:22:07.289039 2226 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:22:07.297883 kubelet[2226]: I0117 00:22:07.297284 2226 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:22:07.297883 kubelet[2226]: I0117 00:22:07.297327 2226 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:22:07.297883 kubelet[2226]: I0117 00:22:07.297364 2226 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:22:07.297883 kubelet[2226]: I0117 00:22:07.297391 2226 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:22:07.297883 kubelet[2226]: E0117 00:22:07.297461 2226 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:22:07.303914 kubelet[2226]: E0117 00:22:07.303557 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:22:07.308389 kubelet[2226]: I0117 00:22:07.307977 2226 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:22:07.308389 kubelet[2226]: I0117 00:22:07.308003 2226 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:22:07.308389 kubelet[2226]: I0117 00:22:07.308032 2226 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:07.311505 kubelet[2226]: I0117 00:22:07.311113 2226 policy_none.go:49] "None policy: Start" Jan 17 00:22:07.311505 kubelet[2226]: I0117 00:22:07.311150 2226 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:22:07.311505 kubelet[2226]: I0117 00:22:07.311168 2226 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:22:07.320972 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:22:07.341494 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:22:07.347715 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:22:07.358876 kubelet[2226]: E0117 00:22:07.358717 2226 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" Jan 17 00:22:07.361363 kubelet[2226]: E0117 00:22:07.361321 2226 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:22:07.362051 kubelet[2226]: I0117 00:22:07.362011 2226 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:22:07.362110 kubelet[2226]: I0117 00:22:07.362030 2226 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:22:07.362613 kubelet[2226]: I0117 00:22:07.362585 2226 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:22:07.366223 kubelet[2226]: E0117 00:22:07.365369 2226 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:22:07.366223 kubelet[2226]: E0117 00:22:07.365487 2226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" Jan 17 00:22:07.423225 systemd[1]: Created slice kubepods-burstable-podf3997661cfbff51d76f0070932e13138.slice - libcontainer container kubepods-burstable-podf3997661cfbff51d76f0070932e13138.slice. 
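The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created at 00:22:07.32-07.35, and the per-pod slices created here and just below, follow the kubelet's systemd cgroup naming for QoS classes: each static pod's UID is embedded in a slice nested under its QoS-class slice. The sketch below reproduces that naming for the three UIDs in this log (the UID-to-pod mapping comes from the volume and RunPodSandbox entries further down); the dash-to-underscore substitution is the general convention for UIDs that contain dashes, which these particular UIDs do not, so treat that detail as hedged.

```python
# Reconstruct the slice names created above from pod QoS class + pod UID.
# The replace("-", "_") step is the kubelet's convention for UIDs containing
# dashes; the UIDs in this log have none, so it is a no-op here.
def pod_slice(qos: str, uid: str) -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

for uid in (
    "f3997661cfbff51d76f0070932e13138",  # kube-scheduler static pod
    "989c77c0bec5aa9adddc44480e7d3e82",  # kube-apiserver static pod
    "ce90fdbb407d3420d9396d5ce404962e",  # kube-controller-manager static pod
):
    print(pod_slice("burstable", uid))
```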
Jan 17 00:22:07.431535 kubelet[2226]: E0117 00:22:07.431485 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.438934 systemd[1]: Created slice kubepods-burstable-pod989c77c0bec5aa9adddc44480e7d3e82.slice - libcontainer container kubepods-burstable-pod989c77c0bec5aa9adddc44480e7d3e82.slice. Jan 17 00:22:07.443547 kubelet[2226]: E0117 00:22:07.443013 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.462449 kubelet[2226]: E0117 00:22:07.462010 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a?timeout=10s\": dial tcp 10.128.0.88:6443: connect: connection refused" interval="400ms" Jan 17 00:22:07.462107 systemd[1]: Created slice kubepods-burstable-podce90fdbb407d3420d9396d5ce404962e.slice - libcontainer container kubepods-burstable-podce90fdbb407d3420d9396d5ce404962e.slice. Jan 17 00:22:07.471886 kubelet[2226]: I0117 00:22:07.470508 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.471886 kubelet[2226]: E0117 00:22:07.471302 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.471886 kubelet[2226]: E0117 00:22:07.471467 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.88:6443/api/v1/nodes\": dial tcp 10.128.0.88:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.559601 kubelet[2226]: I0117 00:22:07.559505 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.559601 kubelet[2226]: I0117 00:22:07.559577 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.559979 kubelet[2226]: I0117 00:22:07.559614 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.559979 kubelet[2226]: I0117 00:22:07.559662 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.559979 kubelet[2226]: I0117 00:22:07.559696 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3997661cfbff51d76f0070932e13138-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"f3997661cfbff51d76f0070932e13138\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.559979 kubelet[2226]: I0117 00:22:07.559728 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/989c77c0bec5aa9adddc44480e7d3e82-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"989c77c0bec5aa9adddc44480e7d3e82\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.560147 kubelet[2226]: I0117 00:22:07.559755 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/989c77c0bec5aa9adddc44480e7d3e82-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"989c77c0bec5aa9adddc44480e7d3e82\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.560147 kubelet[2226]: I0117 00:22:07.559781 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/989c77c0bec5aa9adddc44480e7d3e82-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"989c77c0bec5aa9adddc44480e7d3e82\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.560147 kubelet[2226]: I0117 00:22:07.559810 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.682400 kubelet[2226]: I0117 00:22:07.682220 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.682860 kubelet[2226]: E0117 00:22:07.682759 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.88:6443/api/v1/nodes\": dial tcp 10.128.0.88:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:07.733973 containerd[1466]: 
time="2026-01-17T00:22:07.733896634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,Uid:f3997661cfbff51d76f0070932e13138,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:07.749624 containerd[1466]: time="2026-01-17T00:22:07.749527983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,Uid:989c77c0bec5aa9adddc44480e7d3e82,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:07.772693 containerd[1466]: time="2026-01-17T00:22:07.772602630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,Uid:ce90fdbb407d3420d9396d5ce404962e,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:07.865070 kubelet[2226]: E0117 00:22:07.864960 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a?timeout=10s\": dial tcp 10.128.0.88:6443: connect: connection refused" interval="800ms" Jan 17 00:22:08.089559 kubelet[2226]: I0117 00:22:08.089401 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:08.090000 kubelet[2226]: E0117 00:22:08.089918 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.88:6443/api/v1/nodes\": dial tcp 10.128.0.88:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:08.114597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387316571.mount: Deactivated successfully. 
Jan 17 00:22:08.129880 containerd[1466]: time="2026-01-17T00:22:08.128458645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:08.130729 containerd[1466]: time="2026-01-17T00:22:08.130644351Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:08.132289 containerd[1466]: time="2026-01-17T00:22:08.132196337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:22:08.133215 containerd[1466]: time="2026-01-17T00:22:08.133125728Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:22:08.134905 containerd[1466]: time="2026-01-17T00:22:08.134813563Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:08.137079 containerd[1466]: time="2026-01-17T00:22:08.136931154Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:08.138178 containerd[1466]: time="2026-01-17T00:22:08.138078742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313054" Jan 17 00:22:08.148875 containerd[1466]: time="2026-01-17T00:22:08.147008937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:08.150896 containerd[1466]: time="2026-01-17T00:22:08.150799065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 401.147683ms" Jan 17 00:22:08.156093 containerd[1466]: time="2026-01-17T00:22:08.156020684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.011745ms" Jan 17 00:22:08.162804 containerd[1466]: time="2026-01-17T00:22:08.162723195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 389.998514ms" Jan 17 00:22:08.391803 kubelet[2226]: E0117 00:22:08.391620 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:22:08.414388 containerd[1466]: time="2026-01-17T00:22:08.414185371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:08.414388 containerd[1466]: time="2026-01-17T00:22:08.414272391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:08.414388 containerd[1466]: time="2026-01-17T00:22:08.414309959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:08.414791 containerd[1466]: time="2026-01-17T00:22:08.414449336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:08.415948 containerd[1466]: time="2026-01-17T00:22:08.415805199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:08.416162 containerd[1466]: time="2026-01-17T00:22:08.415940040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:08.416162 containerd[1466]: time="2026-01-17T00:22:08.415981631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:08.416162 containerd[1466]: time="2026-01-17T00:22:08.416118220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:08.425678 containerd[1466]: time="2026-01-17T00:22:08.425340420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:08.425678 containerd[1466]: time="2026-01-17T00:22:08.425582954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:08.425678 containerd[1466]: time="2026-01-17T00:22:08.425607097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:08.429606 containerd[1466]: time="2026-01-17T00:22:08.429298132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:08.480175 systemd[1]: Started cri-containerd-5353d4d5abc9e4e2fef73d97968a9f29532d1ea3a3a73f941186589b63d5b069.scope - libcontainer container 5353d4d5abc9e4e2fef73d97968a9f29532d1ea3a3a73f941186589b63d5b069. Jan 17 00:22:08.490912 systemd[1]: Started cri-containerd-16ea30b6081abd5a9b9a51a8b4ae94ad3d024ca1ae0596c7aa5d92fe45d879e9.scope - libcontainer container 16ea30b6081abd5a9b9a51a8b4ae94ad3d024ca1ae0596c7aa5d92fe45d879e9. Jan 17 00:22:08.495164 systemd[1]: Started cri-containerd-876158946db7090e53034db54d3bc6acc32e808be3573a9e1a0d3a122c5b6239.scope - libcontainer container 876158946db7090e53034db54d3bc6acc32e808be3573a9e1a0d3a122c5b6239. 
Jan 17 00:22:08.554314 kubelet[2226]: E0117 00:22:08.553768 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:22:08.629149 containerd[1466]: time="2026-01-17T00:22:08.628874595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,Uid:989c77c0bec5aa9adddc44480e7d3e82,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ea30b6081abd5a9b9a51a8b4ae94ad3d024ca1ae0596c7aa5d92fe45d879e9\"" Jan 17 00:22:08.630866 containerd[1466]: time="2026-01-17T00:22:08.630263991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,Uid:f3997661cfbff51d76f0070932e13138,Namespace:kube-system,Attempt:0,} returns sandbox id \"5353d4d5abc9e4e2fef73d97968a9f29532d1ea3a3a73f941186589b63d5b069\"" Jan 17 00:22:08.637414 kubelet[2226]: E0117 00:22:08.637360 2226 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7" Jan 17 00:22:08.640105 kubelet[2226]: E0117 00:22:08.640059 2226 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7" Jan 17 00:22:08.646329 containerd[1466]: time="2026-01-17T00:22:08.646152870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a,Uid:ce90fdbb407d3420d9396d5ce404962e,Namespace:kube-system,Attempt:0,} returns sandbox id \"876158946db7090e53034db54d3bc6acc32e808be3573a9e1a0d3a122c5b6239\"" Jan 17 00:22:08.647813 containerd[1466]: time="2026-01-17T00:22:08.647754717Z" level=info msg="CreateContainer within sandbox \"16ea30b6081abd5a9b9a51a8b4ae94ad3d024ca1ae0596c7aa5d92fe45d879e9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:22:08.651725 kubelet[2226]: E0117 00:22:08.651262 2226 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973d" Jan 17 00:22:08.652404 containerd[1466]: time="2026-01-17T00:22:08.652350949Z" level=info msg="CreateContainer within sandbox \"5353d4d5abc9e4e2fef73d97968a9f29532d1ea3a3a73f941186589b63d5b069\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:22:08.653208 kubelet[2226]: E0117 00:22:08.653157 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:22:08.657324 containerd[1466]: time="2026-01-17T00:22:08.657142038Z" level=info msg="CreateContainer within sandbox 
\"876158946db7090e53034db54d3bc6acc32e808be3573a9e1a0d3a122c5b6239\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:22:08.666872 kubelet[2226]: E0117 00:22:08.666776 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a?timeout=10s\": dial tcp 10.128.0.88:6443: connect: connection refused" interval="1.6s" Jan 17 00:22:08.676710 containerd[1466]: time="2026-01-17T00:22:08.676334957Z" level=info msg="CreateContainer within sandbox \"5353d4d5abc9e4e2fef73d97968a9f29532d1ea3a3a73f941186589b63d5b069\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6aad083095eda70ba2bd98a6fac84bc2caaded52394ce34157c065467cb24c1c\"" Jan 17 00:22:08.678957 containerd[1466]: time="2026-01-17T00:22:08.678897514Z" level=info msg="StartContainer for \"6aad083095eda70ba2bd98a6fac84bc2caaded52394ce34157c065467cb24c1c\"" Jan 17 00:22:08.688219 containerd[1466]: time="2026-01-17T00:22:08.688001781Z" level=info msg="CreateContainer within sandbox \"876158946db7090e53034db54d3bc6acc32e808be3573a9e1a0d3a122c5b6239\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4bb04cda3bca90a3bc71bb6c9e2ef63c8d0819d466f181074a8535b0ca18e14a\"" Jan 17 00:22:08.691870 containerd[1466]: time="2026-01-17T00:22:08.691238110Z" level=info msg="StartContainer for \"4bb04cda3bca90a3bc71bb6c9e2ef63c8d0819d466f181074a8535b0ca18e14a\"" Jan 17 00:22:08.696883 containerd[1466]: time="2026-01-17T00:22:08.696796418Z" level=info msg="CreateContainer within sandbox \"16ea30b6081abd5a9b9a51a8b4ae94ad3d024ca1ae0596c7aa5d92fe45d879e9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dce82fb15c7e6d94c3d843b143d8a49715766cbca0ac88f5653f13e9b670820b\"" Jan 17 00:22:08.698297 containerd[1466]: time="2026-01-17T00:22:08.698232649Z" level=info msg="StartContainer for \"dce82fb15c7e6d94c3d843b143d8a49715766cbca0ac88f5653f13e9b670820b\"" Jan 17 00:22:08.754289 systemd[1]: Started cri-containerd-6aad083095eda70ba2bd98a6fac84bc2caaded52394ce34157c065467cb24c1c.scope - libcontainer container 6aad083095eda70ba2bd98a6fac84bc2caaded52394ce34157c065467cb24c1c. Jan 17 00:22:08.780112 systemd[1]: Started cri-containerd-4bb04cda3bca90a3bc71bb6c9e2ef63c8d0819d466f181074a8535b0ca18e14a.scope - libcontainer container 4bb04cda3bca90a3bc71bb6c9e2ef63c8d0819d466f181074a8535b0ca18e14a. Jan 17 00:22:08.796216 systemd[1]: Started cri-containerd-dce82fb15c7e6d94c3d843b143d8a49715766cbca0ac88f5653f13e9b670820b.scope - libcontainer container dce82fb15c7e6d94c3d843b143d8a49715766cbca0ac88f5653f13e9b670820b. 
Jan 17 00:22:08.809666 kubelet[2226]: E0117 00:22:08.809547 2226 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a&limit=500&resourceVersion=0\": dial tcp 10.128.0.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:22:08.898141 containerd[1466]: time="2026-01-17T00:22:08.896930869Z" level=info msg="StartContainer for \"6aad083095eda70ba2bd98a6fac84bc2caaded52394ce34157c065467cb24c1c\" returns successfully" Jan 17 00:22:08.898687 kubelet[2226]: I0117 00:22:08.898666 2226 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:08.900909 kubelet[2226]: E0117 00:22:08.899553 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.88:6443/api/v1/nodes\": dial tcp 10.128.0.88:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:08.953866 containerd[1466]: time="2026-01-17T00:22:08.952214834Z" level=info msg="StartContainer for \"4bb04cda3bca90a3bc71bb6c9e2ef63c8d0819d466f181074a8535b0ca18e14a\" returns successfully" Jan 17 00:22:08.960689 containerd[1466]: time="2026-01-17T00:22:08.960379664Z" level=info msg="StartContainer for \"dce82fb15c7e6d94c3d843b143d8a49715766cbca0ac88f5653f13e9b670820b\" returns successfully" Jan 17 00:22:09.321868 kubelet[2226]: E0117 00:22:09.318905 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:09.327142 kubelet[2226]: E0117 00:22:09.326399 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:09.334543 kubelet[2226]: E0117 00:22:09.334497 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:10.335105 kubelet[2226]: E0117 00:22:10.334786 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:10.339011 kubelet[2226]: E0117 00:22:10.338522 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:10.339869 kubelet[2226]: E0117 00:22:10.339622 2226 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:10.505900 kubelet[2226]: I0117 00:22:10.505723 2226 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.359203 kubelet[2226]: E0117 00:22:12.359093 2226 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" not found" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.425368 kubelet[2226]: I0117 00:22:12.424814 2226 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.459996 kubelet[2226]: I0117 00:22:12.459945 2226 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.495034 kubelet[2226]: E0117 00:22:12.494963 2226 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.495034 kubelet[2226]: I0117 00:22:12.495031 2226 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.499665 kubelet[2226]: E0117 00:22:12.499585 2226 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.499665 kubelet[2226]: I0117 00:22:12.499661 2226 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.524146 kubelet[2226]: E0117 00:22:12.524064 2226 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:12.592784 update_engine[1440]: I20260117 00:22:12.592677 1440 update_attempter.cc:509] Updating boot flags... Jan 17 00:22:12.686475 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2521) Jan 17 00:22:12.817895 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2520) Jan 17 00:22:13.202454 kubelet[2226]: I0117 00:22:13.202045 2226 apiserver.go:52] "Watching apiserver" Jan 17 00:22:13.258981 kubelet[2226]: I0117 00:22:13.258907 2226 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:22:14.983866 systemd[1]: Reloading requested from client PID 2532 ('systemctl') (unit session-9.scope)... Jan 17 00:22:14.983892 systemd[1]: Reloading... Jan 17 00:22:15.178881 zram_generator::config[2578]: No configuration found. Jan 17 00:22:15.329478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:22:15.475369 systemd[1]: Reloading finished in 490 ms. Jan 17 00:22:15.538200 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 00:22:15.550212 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:22:15.550626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:15.550735 systemd[1]: kubelet.service: Consumed 1.899s CPU time, 132.8M memory peak, 0B memory swap peak. Jan 17 00:22:15.559976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:15.926184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:15.940639 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:22:16.031859 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:22:16.031859 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:22:16.031859 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:22:16.031859 kubelet[2620]: I0117 00:22:16.031102 2620 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:22:16.051204 kubelet[2620]: I0117 00:22:16.050960 2620 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:22:16.051204 kubelet[2620]: I0117 00:22:16.051012 2620 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:22:16.051500 kubelet[2620]: I0117 00:22:16.051417 2620 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:22:16.055536 kubelet[2620]: I0117 00:22:16.053925 2620 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:22:16.058999 kubelet[2620]: I0117 00:22:16.058952 2620 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:22:16.070223 kubelet[2620]: E0117 00:22:16.070056 2620 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:22:16.070932 kubelet[2620]: I0117 00:22:16.070645 2620 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:22:16.084475 kubelet[2620]: I0117 00:22:16.084297 2620 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:22:16.087532 kubelet[2620]: I0117 00:22:16.085942 2620 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:22:16.087532 kubelet[2620]: I0117 00:22:16.086018 2620 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:22:16.087532 kubelet[2620]: I0117 00:22:16.086296 2620 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:22:16.087532 kubelet[2620]: I0117 00:22:16.086314 2620 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:22:16.088401 kubelet[2620]: I0117 00:22:16.086388 2620 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:16.093691 kubelet[2620]: I0117 00:22:16.088548 2620 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:22:16.094030 kubelet[2620]: I0117 00:22:16.094001 2620 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:22:16.094179 kubelet[2620]: I0117 00:22:16.094165 2620 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:22:16.094374 kubelet[2620]: I0117 00:22:16.094358 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:22:16.105134 sudo[2634]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:22:16.105795 sudo[2634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:22:16.115093 kubelet[2620]: I0117 00:22:16.113614 2620 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:22:16.116034 kubelet[2620]: I0117 00:22:16.115421 2620 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:22:16.156721 kubelet[2620]: I0117 00:22:16.156678 2620 watchdog_linux.go:99] "Systemd watchdog is not enabled" 
Jan 17 00:22:16.157480 kubelet[2620]: I0117 00:22:16.156897 2620 server.go:1289] "Started kubelet" Jan 17 00:22:16.159495 kubelet[2620]: I0117 00:22:16.159328 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:22:16.165793 kubelet[2620]: I0117 00:22:16.163527 2620 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:22:16.171248 kubelet[2620]: I0117 00:22:16.171199 2620 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:22:16.185077 kubelet[2620]: I0117 00:22:16.183994 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:22:16.193640 kubelet[2620]: I0117 00:22:16.193585 2620 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:22:16.196416 kubelet[2620]: I0117 00:22:16.196301 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:22:16.199997 kubelet[2620]: I0117 00:22:16.199957 2620 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:22:16.205700 kubelet[2620]: I0117 00:22:16.204748 2620 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:22:16.207985 kubelet[2620]: I0117 00:22:16.206085 2620 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:22:16.211508 kubelet[2620]: I0117 00:22:16.211458 2620 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:22:16.211695 kubelet[2620]: I0117 00:22:16.211648 2620 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:22:16.217235 kubelet[2620]: I0117 00:22:16.217135 2620 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:22:16.221749 kubelet[2620]: I0117 00:22:16.220689 2620 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:22:16.221749 kubelet[2620]: I0117 00:22:16.220785 2620 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:22:16.221749 kubelet[2620]: I0117 00:22:16.220881 2620 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:22:16.221749 kubelet[2620]: I0117 00:22:16.220896 2620 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:22:16.221749 kubelet[2620]: E0117 00:22:16.221125 2620 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:22:16.234817 kubelet[2620]: I0117 00:22:16.234171 2620 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:22:16.248279 kubelet[2620]: E0117 00:22:16.246345 2620 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:22:16.321855 kubelet[2620]: E0117 00:22:16.321777 2620 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:22:16.376505 kubelet[2620]: I0117 00:22:16.376116 2620 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:22:16.376505 kubelet[2620]: I0117 00:22:16.376144 2620 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:22:16.376505 kubelet[2620]: I0117 00:22:16.376179 2620 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:16.376505 kubelet[2620]: I0117 00:22:16.376397 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:22:16.376505 kubelet[2620]: I0117 00:22:16.376414 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:22:16.377655 kubelet[2620]: I0117 00:22:16.377260 2620 policy_none.go:49] "None policy: Start" Jan 17 00:22:16.377655 kubelet[2620]: I0117 00:22:16.377295 2620 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:22:16.377655 kubelet[2620]: I0117 00:22:16.377319 2620 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:22:16.377655 kubelet[2620]: I0117 00:22:16.377536 2620 state_mem.go:75] "Updated machine memory state" Jan 17 00:22:16.390947 kubelet[2620]: E0117 00:22:16.389395 2620 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:22:16.390947 kubelet[2620]: I0117 00:22:16.389707 2620 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:22:16.390947 kubelet[2620]: I0117 00:22:16.389732 2620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:22:16.390947 kubelet[2620]: I0117 00:22:16.390655 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:22:16.409479 kubelet[2620]: E0117 00:22:16.409265 2620 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:22:16.513762 kubelet[2620]: I0117 00:22:16.512129 2620 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.527864 kubelet[2620]: I0117 00:22:16.526020 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.532450 kubelet[2620]: I0117 00:22:16.530389 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.532450 kubelet[2620]: I0117 00:22:16.531032 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.549865 kubelet[2620]: I0117 00:22:16.548936 2620 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.550296 kubelet[2620]: I0117 00:22:16.550272 2620 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.565274 kubelet[2620]: I0117 00:22:16.565238 2620 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 17 00:22:16.567267 kubelet[2620]: I0117 00:22:16.566875 2620 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 17 00:22:16.568251 kubelet[2620]: I0117 00:22:16.568113 2620 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 17 00:22:16.610558 kubelet[2620]: I0117 00:22:16.610067 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/989c77c0bec5aa9adddc44480e7d3e82-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"989c77c0bec5aa9adddc44480e7d3e82\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.610558 kubelet[2620]: I0117 00:22:16.610135 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/989c77c0bec5aa9adddc44480e7d3e82-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"989c77c0bec5aa9adddc44480e7d3e82\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.610558 kubelet[2620]: I0117 00:22:16.610177 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.610558 kubelet[2620]: I0117 00:22:16.610212 2620 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.610938 kubelet[2620]: I0117 00:22:16.610246 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.610938 kubelet[2620]: I0117 00:22:16.610280 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/989c77c0bec5aa9adddc44480e7d3e82-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"989c77c0bec5aa9adddc44480e7d3e82\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.610938 kubelet[2620]: I0117 00:22:16.610325 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.610938 kubelet[2620]: I0117 00:22:16.610369 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce90fdbb407d3420d9396d5ce404962e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"ce90fdbb407d3420d9396d5ce404962e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:16.611154 kubelet[2620]: I0117 00:22:16.610401 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3997661cfbff51d76f0070932e13138-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" (UID: \"f3997661cfbff51d76f0070932e13138\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:17.125950 kubelet[2620]: I0117 00:22:17.125878 2620 apiserver.go:52] "Watching apiserver" Jan 17 00:22:17.128618 sudo[2634]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:17.195913 kubelet[2620]: I0117 00:22:17.195645 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" podStartSLOduration=1.19562024 podStartE2EDuration="1.19562024s" podCreationTimestamp="2026-01-17 00:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 
00:22:17.178684826 +0000 UTC m=+1.224483537" watchObservedRunningTime="2026-01-17 00:22:17.19562024 +0000 UTC m=+1.241418947" Jan 17 00:22:17.206165 kubelet[2620]: I0117 00:22:17.206068 2620 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:22:17.213206 kubelet[2620]: I0117 00:22:17.212998 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" podStartSLOduration=1.212970968 podStartE2EDuration="1.212970968s" podCreationTimestamp="2026-01-17 00:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:17.19842321 +0000 UTC m=+1.244221917" watchObservedRunningTime="2026-01-17 00:22:17.212970968 +0000 UTC m=+1.258769678" Jan 17 00:22:17.249589 kubelet[2620]: I0117 00:22:17.249508 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" podStartSLOduration=1.249486197 podStartE2EDuration="1.249486197s" podCreationTimestamp="2026-01-17 00:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:17.215189028 +0000 UTC m=+1.260987742" watchObservedRunningTime="2026-01-17 00:22:17.249486197 +0000 UTC m=+1.295284906" Jan 17 00:22:17.296319 kubelet[2620]: I0117 00:22:17.295368 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:17.320856 kubelet[2620]: I0117 00:22:17.320065 2620 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jan 17 00:22:17.321402 kubelet[2620]: E0117 00:22:17.321357 2620 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a" Jan 17 00:22:20.109273 sudo[1734]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:20.141696 sshd[1731]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:20.149385 systemd[1]: sshd@8-10.128.0.88:22-4.153.228.146:54534.service: Deactivated successfully. Jan 17 00:22:20.156480 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:22:20.157578 systemd[1]: session-9.scope: Consumed 8.679s CPU time, 162.5M memory peak, 0B memory swap peak. Jan 17 00:22:20.161347 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:22:20.163454 systemd-logind[1438]: Removed session 9. Jan 17 00:22:21.455646 kubelet[2620]: I0117 00:22:21.455393 2620 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:22:21.456426 containerd[1466]: time="2026-01-17T00:22:21.456117049Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
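The pod_startup_latency_tracker entries in this stretch report podStartSLOduration and podStartE2EDuration for each control-plane pod; the two values match here because these static pods required no image pull (firstStartedPulling and lastFinishedPulling are the zero time). A small sketch for pulling those durations out of the journal, again assuming one entry per line as journalctl emits it:

import re
import sys

# Matches kubelet pod_startup_latency_tracker entries such as:
#   "Observed pod startup duration" pod="kube-system/..." podStartSLOduration=1.19562024 ...
PATTERN = re.compile(
    r'Observed pod startup duration" pod="(?P<pod>[^"]+)"\s+podStartSLOduration=(?P<slo>[0-9.]+)'
)

def startup_durations(lines):
    """Yield (pod name, SLO-relevant startup duration in seconds) pairs."""
    for line in lines:
        m = PATTERN.search(line)
        if m:
            yield m.group("pod"), float(m.group("slo"))

if __name__ == "__main__":
    for pod, seconds in startup_durations(sys.stdin):
        print(f"{pod}: {seconds:.3f}s")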
Jan 17 00:22:21.457134 kubelet[2620]: I0117 00:22:21.456515 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:22:22.502205 systemd[1]: Created slice kubepods-besteffort-pod72ac7b3e_71d1_4352_a905_58cc388a3d42.slice - libcontainer container kubepods-besteffort-pod72ac7b3e_71d1_4352_a905_58cc388a3d42.slice. Jan 17 00:22:22.512468 kubelet[2620]: E0117 00:22:22.512098 2620 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Jan 17 00:22:22.516126 kubelet[2620]: E0117 00:22:22.512321 2620 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Jan 17 00:22:22.527726 systemd[1]: Created slice kubepods-burstable-pod29f22310_78a2_4dd2_9ed3_b7cbecd2a977.slice - libcontainer container kubepods-burstable-pod29f22310_78a2_4dd2_9ed3_b7cbecd2a977.slice. Jan 17 00:22:22.553077 kubelet[2620]: I0117 00:22:22.551759 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72ac7b3e-71d1-4352-a905-58cc388a3d42-kube-proxy\") pod \"kube-proxy-sgw56\" (UID: \"72ac7b3e-71d1-4352-a905-58cc388a3d42\") " pod="kube-system/kube-proxy-sgw56" Jan 17 00:22:22.553077 kubelet[2620]: I0117 00:22:22.551846 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktcb\" (UniqueName: \"kubernetes.io/projected/72ac7b3e-71d1-4352-a905-58cc388a3d42-kube-api-access-dktcb\") pod \"kube-proxy-sgw56\" (UID: \"72ac7b3e-71d1-4352-a905-58cc388a3d42\") " pod="kube-system/kube-proxy-sgw56" Jan 17 00:22:22.553077 kubelet[2620]: I0117 00:22:22.551882 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-bpf-maps\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.553077 kubelet[2620]: I0117 00:22:22.551908 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-etc-cni-netd\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.553077 kubelet[2620]: I0117 00:22:22.551944 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hostproc\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 
00:22:22.553077 kubelet[2620]: I0117 00:22:22.551968 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-xtables-lock\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.553584 kubelet[2620]: I0117 00:22:22.551992 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-clustermesh-secrets\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.553584 kubelet[2620]: I0117 00:22:22.552018 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-kernel\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.553584 kubelet[2620]: I0117 00:22:22.552047 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72ac7b3e-71d1-4352-a905-58cc388a3d42-xtables-lock\") pod \"kube-proxy-sgw56\" (UID: \"72ac7b3e-71d1-4352-a905-58cc388a3d42\") " pod="kube-system/kube-proxy-sgw56" Jan 17 00:22:22.553584 kubelet[2620]: I0117 00:22:22.552071 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-cgroup\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.553584 kubelet[2620]: I0117 00:22:22.552099 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cni-path\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.553584 kubelet[2620]: I0117 00:22:22.552129 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-lib-modules\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.554416 kubelet[2620]: I0117 00:22:22.552155 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-config-path\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.554416 kubelet[2620]: I0117 00:22:22.552179 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-net\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.554416 kubelet[2620]: I0117 00:22:22.552235 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72ac7b3e-71d1-4352-a905-58cc388a3d42-lib-modules\") pod \"kube-proxy-sgw56\" (UID: \"72ac7b3e-71d1-4352-a905-58cc388a3d42\") " pod="kube-system/kube-proxy-sgw56" Jan 17 00:22:22.554416 kubelet[2620]: I0117 00:22:22.552263 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-run\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.554416 kubelet[2620]: I0117 00:22:22.552292 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hubble-tls\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.554416 kubelet[2620]: I0117 00:22:22.552328 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndq8\" (UniqueName: \"kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-kube-api-access-4ndq8\") pod \"cilium-m4fzp\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " pod="kube-system/cilium-m4fzp" Jan 17 00:22:22.633719 kubelet[2620]: I0117 00:22:22.633195 2620 status_manager.go:895] "Failed to get status for pod" podUID="6a30224e-4681-468f-a56a-be1f49fb04e1" pod="kube-system/cilium-operator-6c4d7847fc-6ms6h" err="pods \"cilium-operator-6c4d7847fc-6ms6h\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a' and this object" Jan 17 00:22:22.645956 systemd[1]: Created slice kubepods-besteffort-pod6a30224e_4681_468f_a56a_be1f49fb04e1.slice - libcontainer container kubepods-besteffort-pod6a30224e_4681_468f_a56a_be1f49fb04e1.slice. Jan 17 00:22:22.653210 kubelet[2620]: I0117 00:22:22.653140 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw2nr\" (UniqueName: \"kubernetes.io/projected/6a30224e-4681-468f-a56a-be1f49fb04e1-kube-api-access-rw2nr\") pod \"cilium-operator-6c4d7847fc-6ms6h\" (UID: \"6a30224e-4681-468f-a56a-be1f49fb04e1\") " pod="kube-system/cilium-operator-6c4d7847fc-6ms6h" Jan 17 00:22:22.653429 kubelet[2620]: I0117 00:22:22.653406 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a30224e-4681-468f-a56a-be1f49fb04e1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6ms6h\" (UID: \"6a30224e-4681-468f-a56a-be1f49fb04e1\") " pod="kube-system/cilium-operator-6c4d7847fc-6ms6h" Jan 17 00:22:22.818796 containerd[1466]: time="2026-01-17T00:22:22.818604052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgw56,Uid:72ac7b3e-71d1-4352-a905-58cc388a3d42,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:22.865107 containerd[1466]: time="2026-01-17T00:22:22.864792472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:22.865955 containerd[1466]: time="2026-01-17T00:22:22.865867580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:22.866384 containerd[1466]: time="2026-01-17T00:22:22.866154226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.866530 containerd[1466]: time="2026-01-17T00:22:22.866361379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.899261 systemd[1]: Started cri-containerd-989998e0bfcf9ca6c4c67ec20b7f0a6609b6eecdf904ecab53eef10deebb2586.scope - libcontainer container 989998e0bfcf9ca6c4c67ec20b7f0a6609b6eecdf904ecab53eef10deebb2586. Jan 17 00:22:22.943117 containerd[1466]: time="2026-01-17T00:22:22.943057003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sgw56,Uid:72ac7b3e-71d1-4352-a905-58cc388a3d42,Namespace:kube-system,Attempt:0,} returns sandbox id \"989998e0bfcf9ca6c4c67ec20b7f0a6609b6eecdf904ecab53eef10deebb2586\"" Jan 17 00:22:22.952843 containerd[1466]: time="2026-01-17T00:22:22.952748527Z" level=info msg="CreateContainer within sandbox \"989998e0bfcf9ca6c4c67ec20b7f0a6609b6eecdf904ecab53eef10deebb2586\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:22:22.976770 containerd[1466]: time="2026-01-17T00:22:22.976576244Z" level=info msg="CreateContainer within sandbox \"989998e0bfcf9ca6c4c67ec20b7f0a6609b6eecdf904ecab53eef10deebb2586\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1cb1cbae2f555453d9a2195e1a572fd06ee21242675e6c1820e4ef1b3f7e360e\"" Jan 17 00:22:22.978989 containerd[1466]: time="2026-01-17T00:22:22.978917589Z" level=info msg="StartContainer for \"1cb1cbae2f555453d9a2195e1a572fd06ee21242675e6c1820e4ef1b3f7e360e\"" Jan 17 00:22:23.039172 systemd[1]: Started cri-containerd-1cb1cbae2f555453d9a2195e1a572fd06ee21242675e6c1820e4ef1b3f7e360e.scope - libcontainer container 1cb1cbae2f555453d9a2195e1a572fd06ee21242675e6c1820e4ef1b3f7e360e. Jan 17 00:22:23.088908 containerd[1466]: time="2026-01-17T00:22:23.087922966Z" level=info msg="StartContainer for \"1cb1cbae2f555453d9a2195e1a572fd06ee21242675e6c1820e4ef1b3f7e360e\" returns successfully" Jan 17 00:22:23.656575 kubelet[2620]: E0117 00:22:23.656369 2620 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:22:23.658209 kubelet[2620]: E0117 00:22:23.657930 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-config-path podName:29f22310-78a2-4dd2-9ed3-b7cbecd2a977 nodeName:}" failed. No retries permitted until 2026-01-17 00:22:24.156543821 +0000 UTC m=+8.202342529 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-config-path") pod "cilium-m4fzp" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:22:23.754318 kubelet[2620]: E0117 00:22:23.754263 2620 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 00:22:23.754539 kubelet[2620]: E0117 00:22:23.754392 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a30224e-4681-468f-a56a-be1f49fb04e1-cilium-config-path podName:6a30224e-4681-468f-a56a-be1f49fb04e1 nodeName:}" failed. No retries permitted until 2026-01-17 00:22:24.254364619 +0000 UTC m=+8.300163318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/6a30224e-4681-468f-a56a-be1f49fb04e1-cilium-config-path") pod "cilium-operator-6c4d7847fc-6ms6h" (UID: "6a30224e-4681-468f-a56a-be1f49fb04e1") : failed to sync configmap cache: timed out waiting for the condition Jan 17 00:22:24.336245 containerd[1466]: time="2026-01-17T00:22:24.336182224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m4fzp,Uid:29f22310-78a2-4dd2-9ed3-b7cbecd2a977,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:24.376662 containerd[1466]: time="2026-01-17T00:22:24.376332590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:24.376662 containerd[1466]: time="2026-01-17T00:22:24.376532598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:24.377116 containerd[1466]: time="2026-01-17T00:22:24.376574867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:24.377270 containerd[1466]: time="2026-01-17T00:22:24.377122207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:24.417203 systemd[1]: Started cri-containerd-7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0.scope - libcontainer container 7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0. Jan 17 00:22:24.457435 containerd[1466]: time="2026-01-17T00:22:24.456599652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6ms6h,Uid:6a30224e-4681-468f-a56a-be1f49fb04e1,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:24.459209 containerd[1466]: time="2026-01-17T00:22:24.459098473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m4fzp,Uid:29f22310-78a2-4dd2-9ed3-b7cbecd2a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\"" Jan 17 00:22:24.465297 containerd[1466]: time="2026-01-17T00:22:24.465240533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:22:24.507960 containerd[1466]: time="2026-01-17T00:22:24.507528062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:24.507960 containerd[1466]: time="2026-01-17T00:22:24.507624894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:24.507960 containerd[1466]: time="2026-01-17T00:22:24.507657736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:24.509276 containerd[1466]: time="2026-01-17T00:22:24.507873392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:24.540229 systemd[1]: Started cri-containerd-e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762.scope - libcontainer container e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762. Jan 17 00:22:24.610209 containerd[1466]: time="2026-01-17T00:22:24.609493898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6ms6h,Uid:6a30224e-4681-468f-a56a-be1f49fb04e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762\"" Jan 17 00:22:25.239074 kubelet[2620]: I0117 00:22:25.237743 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sgw56" podStartSLOduration=3.237712343 podStartE2EDuration="3.237712343s" podCreationTimestamp="2026-01-17 00:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:23.358213643 +0000 UTC m=+7.404012353" watchObservedRunningTime="2026-01-17 00:22:25.237712343 +0000 UTC m=+9.283511053" Jan 17 00:22:32.237505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271779455.mount: Deactivated successfully. 
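The MountVolume.SetUp failures above for cilium-config-path follow directly from the reflector errors at 00:22:22: until the node-to-object relationship exists, the ConfigMap cache cannot sync, so the kubelet schedules a retry with durationBeforeRetry 500ms. The kubelet grows that delay on repeated failures; the factor and cap in the sketch below are assumptions made for illustration, not values taken from this log.

from datetime import timedelta

# Assumed parameters sketching the shape of the kubelet's volume-operation backoff suggested by
# the "durationBeforeRetry 500ms" message above: start at 500ms, double per failure, cap the delay.
INITIAL = timedelta(milliseconds=500)
FACTOR = 2                              # assumption
CAP = timedelta(minutes=2, seconds=2)   # assumption

def backoff_schedule(failures):
    """Return the delay applied before each retry across `failures` consecutive failures."""
    delays = []
    delay = INITIAL
    for _ in range(failures):
        delays.append(min(delay, CAP))
        delay = delay * FACTOR
    return delays

if __name__ == "__main__":
    for i, d in enumerate(backoff_schedule(10), start=1):
        print(f"failure {i}: wait {d.total_seconds():.1f}s before retrying")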
Jan 17 00:22:35.509712 containerd[1466]: time="2026-01-17T00:22:35.509622879Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:35.511660 containerd[1466]: time="2026-01-17T00:22:35.511302767Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:22:35.513944 containerd[1466]: time="2026-01-17T00:22:35.513334461Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:35.515896 containerd[1466]: time="2026-01-17T00:22:35.515789685Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.050205234s" Jan 17 00:22:35.515896 containerd[1466]: time="2026-01-17T00:22:35.515900408Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:22:35.519555 containerd[1466]: time="2026-01-17T00:22:35.519384227Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:22:35.527172 containerd[1466]: time="2026-01-17T00:22:35.527095621Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:22:35.563093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278426945.mount: Deactivated successfully. Jan 17 00:22:35.568465 containerd[1466]: time="2026-01-17T00:22:35.568375007Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\"" Jan 17 00:22:35.569522 containerd[1466]: time="2026-01-17T00:22:35.569371128Z" level=info msg="StartContainer for \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\"" Jan 17 00:22:35.632248 systemd[1]: Started cri-containerd-a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372.scope - libcontainer container a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372. Jan 17 00:22:35.673315 containerd[1466]: time="2026-01-17T00:22:35.673249483Z" level=info msg="StartContainer for \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\" returns successfully" Jan 17 00:22:35.689543 systemd[1]: cri-containerd-a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372.scope: Deactivated successfully. Jan 17 00:22:36.544468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372-rootfs.mount: Deactivated successfully. 
Jan 17 00:22:37.523772 containerd[1466]: time="2026-01-17T00:22:37.523653547Z" level=info msg="shim disconnected" id=a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372 namespace=k8s.io Jan 17 00:22:37.523772 containerd[1466]: time="2026-01-17T00:22:37.523754843Z" level=warning msg="cleaning up after shim disconnected" id=a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372 namespace=k8s.io Jan 17 00:22:37.523772 containerd[1466]: time="2026-01-17T00:22:37.523771299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:38.230209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910551171.mount: Deactivated successfully. Jan 17 00:22:38.392330 containerd[1466]: time="2026-01-17T00:22:38.392261254Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:22:38.441609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230757308.mount: Deactivated successfully. Jan 17 00:22:38.453899 containerd[1466]: time="2026-01-17T00:22:38.453713726Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\"" Jan 17 00:22:38.457088 containerd[1466]: time="2026-01-17T00:22:38.457010098Z" level=info msg="StartContainer for \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\"" Jan 17 00:22:38.509454 systemd[1]: Started cri-containerd-d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043.scope - libcontainer container d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043. Jan 17 00:22:38.590142 containerd[1466]: time="2026-01-17T00:22:38.588455303Z" level=info msg="StartContainer for \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\" returns successfully" Jan 17 00:22:38.623225 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:22:38.624652 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:22:38.624817 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:22:38.635416 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:22:38.635891 systemd[1]: cri-containerd-d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043.scope: Deactivated successfully. Jan 17 00:22:38.685655 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
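The shim-disconnected messages above close out the mount-cgroup init container: StartContainer returns at 00:22:35.673, systemd deactivates its scope about 16 ms later, and containerd logs the shim cleanup roughly 1.8 s after that. The sketch below turns those short-form journal timestamps into relative offsets; the year is not part of the short timestamp, so 2026 is supplied as an assumption, and the event labels are invented for the example.

from datetime import datetime

# Timestamps copied from the journal entries above for the mount-cgroup container lifecycle.
EVENTS = {
    "StartContainer returned":    "Jan 17 00:22:35.673315",
    "scope deactivated":          "Jan 17 00:22:35.689543",
    "shim disconnected (logged)": "Jan 17 00:22:37.523772",
}

def parse(ts, year=2026):
    """Parse a short-form journal timestamp (which carries no year) into a datetime."""
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

if __name__ == "__main__":
    start = parse(EVENTS["StartContainer returned"])
    for label, ts in EVENTS.items():
        delta = (parse(ts) - start).total_seconds()
        print(f"{label}: +{delta:.3f}s")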
Jan 17 00:22:38.753497 containerd[1466]: time="2026-01-17T00:22:38.753401726Z" level=info msg="shim disconnected" id=d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043 namespace=k8s.io Jan 17 00:22:38.753497 containerd[1466]: time="2026-01-17T00:22:38.753495406Z" level=warning msg="cleaning up after shim disconnected" id=d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043 namespace=k8s.io Jan 17 00:22:38.754254 containerd[1466]: time="2026-01-17T00:22:38.753515015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:39.399471 containerd[1466]: time="2026-01-17T00:22:39.399373629Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:22:39.447620 containerd[1466]: time="2026-01-17T00:22:39.447533048Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\"" Jan 17 00:22:39.450464 containerd[1466]: time="2026-01-17T00:22:39.448599765Z" level=info msg="StartContainer for \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\"" Jan 17 00:22:39.504187 systemd[1]: Started cri-containerd-610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8.scope - libcontainer container 610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8. Jan 17 00:22:39.568985 containerd[1466]: time="2026-01-17T00:22:39.567478776Z" level=info msg="StartContainer for \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\" returns successfully" Jan 17 00:22:39.568385 systemd[1]: cri-containerd-610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8.scope: Deactivated successfully. 
Jan 17 00:22:39.592650 containerd[1466]: time="2026-01-17T00:22:39.592028467Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:39.594356 containerd[1466]: time="2026-01-17T00:22:39.594031824Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:22:39.597101 containerd[1466]: time="2026-01-17T00:22:39.597031959Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:39.603168 containerd[1466]: time="2026-01-17T00:22:39.602998317Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.082973319s" Jan 17 00:22:39.603168 containerd[1466]: time="2026-01-17T00:22:39.603117124Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:22:39.617312 containerd[1466]: time="2026-01-17T00:22:39.617251532Z" level=info msg="CreateContainer within sandbox \"e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:22:39.758378 containerd[1466]: time="2026-01-17T00:22:39.758257926Z" level=info msg="shim disconnected" id=610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8 namespace=k8s.io Jan 17 00:22:39.758378 containerd[1466]: time="2026-01-17T00:22:39.758359652Z" level=warning msg="cleaning up after shim disconnected" id=610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8 namespace=k8s.io Jan 17 00:22:39.758378 containerd[1466]: time="2026-01-17T00:22:39.758377542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:39.776848 containerd[1466]: time="2026-01-17T00:22:39.776757385Z" level=info msg="CreateContainer within sandbox \"e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\"" Jan 17 00:22:39.781121 containerd[1466]: time="2026-01-17T00:22:39.780151242Z" level=info msg="StartContainer for \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\"" Jan 17 00:22:39.833164 systemd[1]: Started cri-containerd-84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a.scope - libcontainer container 84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a. 
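The two PullImage round trips recorded above and at 00:22:35 report both the bytes read and the elapsed time, which gives a rough effective throughput for each pull: roughly 14 MiB/s for the cilium agent image and about 4 MiB/s for operator-generic. The "bytes read" counter reflects what was fetched from the registry, so this is only an approximation; the arithmetic below uses the two pairs copied verbatim from the log, with the image tags shortened for readability.

# (bytes read, elapsed seconds) taken from the "stop pulling image" / "Pulled image ... in Ns" pairs above.
PULLS = {
    "quay.io/cilium/cilium:v1.12.5": (166730503, 11.050205234),
    "quay.io/cilium/operator-generic:v1.12.5": (18904197, 4.082973319),
}

def throughput_mib_per_s(byte_count, seconds):
    """Average throughput in MiB/s for a pull of byte_count bytes over the given seconds."""
    return byte_count / seconds / (1024 * 1024)

if __name__ == "__main__":
    for image, (byte_count, seconds) in PULLS.items():
        print(f"{image}: {throughput_mib_per_s(byte_count, seconds):.1f} MiB/s over {seconds:.1f}s")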
Jan 17 00:22:39.881340 containerd[1466]: time="2026-01-17T00:22:39.881224189Z" level=info msg="StartContainer for \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\" returns successfully" Jan 17 00:22:40.211315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8-rootfs.mount: Deactivated successfully. Jan 17 00:22:40.407092 containerd[1466]: time="2026-01-17T00:22:40.406855532Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:22:40.436336 containerd[1466]: time="2026-01-17T00:22:40.435818465Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\"" Jan 17 00:22:40.440955 containerd[1466]: time="2026-01-17T00:22:40.440502546Z" level=info msg="StartContainer for \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\"" Jan 17 00:22:40.442894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277147739.mount: Deactivated successfully. Jan 17 00:22:40.544154 systemd[1]: Started cri-containerd-a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b.scope - libcontainer container a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b. Jan 17 00:22:40.689915 containerd[1466]: time="2026-01-17T00:22:40.689511218Z" level=info msg="StartContainer for \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\" returns successfully" Jan 17 00:22:40.692181 systemd[1]: cri-containerd-a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b.scope: Deactivated successfully. Jan 17 00:22:40.762962 containerd[1466]: time="2026-01-17T00:22:40.762616378Z" level=info msg="shim disconnected" id=a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b namespace=k8s.io Jan 17 00:22:40.762962 containerd[1466]: time="2026-01-17T00:22:40.762907513Z" level=warning msg="cleaning up after shim disconnected" id=a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b namespace=k8s.io Jan 17 00:22:40.762962 containerd[1466]: time="2026-01-17T00:22:40.762952254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:40.789571 kubelet[2620]: I0117 00:22:40.789321 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6ms6h" podStartSLOduration=3.797297713 podStartE2EDuration="18.789283565s" podCreationTimestamp="2026-01-17 00:22:22 +0000 UTC" firstStartedPulling="2026-01-17 00:22:24.612576161 +0000 UTC m=+8.658374843" lastFinishedPulling="2026-01-17 00:22:39.60456201 +0000 UTC m=+23.650360695" observedRunningTime="2026-01-17 00:22:40.532407032 +0000 UTC m=+24.578205740" watchObservedRunningTime="2026-01-17 00:22:40.789283565 +0000 UTC m=+24.835082274" Jan 17 00:22:41.208532 systemd[1]: run-containerd-runc-k8s.io-a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b-runc.xAJT2m.mount: Deactivated successfully. Jan 17 00:22:41.208739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b-rootfs.mount: Deactivated successfully. 
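All of the cilium-m4fzp containers created so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) reference the same sandbox id 7faa57b9..., and each one is torn down before the next is created, which is the expected init-container ordering ahead of the long-running agent container that follows. A sketch that reconstructs that per-sandbox ordering from the CreateContainer requests, under the same one-entry-per-line journal assumption as earlier:

import re
import sys
from collections import defaultdict

# Groups containerd "CreateContainer within sandbox ... for container &ContainerMetadata{Name:...}"
# requests by sandbox id, so each pod's container creation order can be read off directly.
CREATE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]{64})\\?" '
    r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),'
)

def containers_per_sandbox(lines):
    """Map sandbox id -> list of container names in creation order."""
    ordered = defaultdict(list)
    for line in lines:
        for match in CREATE.finditer(line):
            ordered[match.group("sandbox")].append(match.group("name"))
    return ordered

if __name__ == "__main__":
    for sandbox, names in containers_per_sandbox(sys.stdin).items():
        print(f"{sandbox[:12]}: {' -> '.join(names)}")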
Jan 17 00:22:41.425234 containerd[1466]: time="2026-01-17T00:22:41.425169139Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:22:41.472955 containerd[1466]: time="2026-01-17T00:22:41.472547547Z" level=info msg="CreateContainer within sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\"" Jan 17 00:22:41.474617 containerd[1466]: time="2026-01-17T00:22:41.474500771Z" level=info msg="StartContainer for \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\"" Jan 17 00:22:41.539461 systemd[1]: Started cri-containerd-781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da.scope - libcontainer container 781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da. Jan 17 00:22:41.594390 containerd[1466]: time="2026-01-17T00:22:41.593338205Z" level=info msg="StartContainer for \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\" returns successfully" Jan 17 00:22:41.851026 kubelet[2620]: I0117 00:22:41.850593 2620 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:22:41.926762 systemd[1]: Created slice kubepods-burstable-podaceec747_699b_4f02_a78c_60285853a41d.slice - libcontainer container kubepods-burstable-podaceec747_699b_4f02_a78c_60285853a41d.slice. Jan 17 00:22:41.944250 systemd[1]: Created slice kubepods-burstable-pod704a9546_01f7_4711_9e22_1bf50f032a9d.slice - libcontainer container kubepods-burstable-pod704a9546_01f7_4711_9e22_1bf50f032a9d.slice. Jan 17 00:22:42.001562 kubelet[2620]: I0117 00:22:42.001482 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7swq\" (UniqueName: \"kubernetes.io/projected/aceec747-699b-4f02-a78c-60285853a41d-kube-api-access-l7swq\") pod \"coredns-674b8bbfcf-25q7q\" (UID: \"aceec747-699b-4f02-a78c-60285853a41d\") " pod="kube-system/coredns-674b8bbfcf-25q7q" Jan 17 00:22:42.001562 kubelet[2620]: I0117 00:22:42.001568 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c4kp\" (UniqueName: \"kubernetes.io/projected/704a9546-01f7-4711-9e22-1bf50f032a9d-kube-api-access-8c4kp\") pod \"coredns-674b8bbfcf-l622x\" (UID: \"704a9546-01f7-4711-9e22-1bf50f032a9d\") " pod="kube-system/coredns-674b8bbfcf-l622x" Jan 17 00:22:42.001885 kubelet[2620]: I0117 00:22:42.001604 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aceec747-699b-4f02-a78c-60285853a41d-config-volume\") pod \"coredns-674b8bbfcf-25q7q\" (UID: \"aceec747-699b-4f02-a78c-60285853a41d\") " pod="kube-system/coredns-674b8bbfcf-25q7q" Jan 17 00:22:42.001885 kubelet[2620]: I0117 00:22:42.001633 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704a9546-01f7-4711-9e22-1bf50f032a9d-config-volume\") pod \"coredns-674b8bbfcf-l622x\" (UID: \"704a9546-01f7-4711-9e22-1bf50f032a9d\") " pod="kube-system/coredns-674b8bbfcf-l622x" Jan 17 00:22:42.240035 containerd[1466]: time="2026-01-17T00:22:42.239965014Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-25q7q,Uid:aceec747-699b-4f02-a78c-60285853a41d,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:42.253305 containerd[1466]: time="2026-01-17T00:22:42.253208251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l622x,Uid:704a9546-01f7-4711-9e22-1bf50f032a9d,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:44.347478 systemd-networkd[1346]: cilium_host: Link UP Jan 17 00:22:44.353021 systemd-networkd[1346]: cilium_net: Link UP Jan 17 00:22:44.356479 systemd-networkd[1346]: cilium_net: Gained carrier Jan 17 00:22:44.356987 systemd-networkd[1346]: cilium_host: Gained carrier Jan 17 00:22:44.388453 systemd-networkd[1346]: cilium_net: Gained IPv6LL Jan 17 00:22:44.547926 systemd-networkd[1346]: cilium_vxlan: Link UP Jan 17 00:22:44.547941 systemd-networkd[1346]: cilium_vxlan: Gained carrier Jan 17 00:22:44.881865 kernel: NET: Registered PF_ALG protocol family Jan 17 00:22:44.979294 systemd-networkd[1346]: cilium_host: Gained IPv6LL Jan 17 00:22:45.747522 systemd-networkd[1346]: cilium_vxlan: Gained IPv6LL Jan 17 00:22:45.960932 systemd-networkd[1346]: lxc_health: Link UP Jan 17 00:22:45.974141 systemd-networkd[1346]: lxc_health: Gained carrier Jan 17 00:22:46.378799 kubelet[2620]: I0117 00:22:46.378698 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m4fzp" podStartSLOduration=13.323570686 podStartE2EDuration="24.378665992s" podCreationTimestamp="2026-01-17 00:22:22 +0000 UTC" firstStartedPulling="2026-01-17 00:22:24.462638707 +0000 UTC m=+8.508437402" lastFinishedPulling="2026-01-17 00:22:35.517734025 +0000 UTC m=+19.563532708" observedRunningTime="2026-01-17 00:22:42.491842615 +0000 UTC m=+26.537641324" watchObservedRunningTime="2026-01-17 00:22:46.378665992 +0000 UTC m=+30.424464701" Jan 17 00:22:46.401110 systemd-networkd[1346]: lxc70be5f31d571: Link UP Jan 17 00:22:46.417872 kernel: eth0: renamed from tmp3e3df Jan 17 00:22:46.441494 systemd-networkd[1346]: lxc70be5f31d571: Gained carrier Jan 17 00:22:46.467810 systemd-networkd[1346]: lxca3d3348f83d3: Link UP Jan 17 00:22:46.483891 kernel: eth0: renamed from tmp2b644 Jan 17 00:22:46.498186 systemd-networkd[1346]: lxca3d3348f83d3: Gained carrier Jan 17 00:22:47.091955 systemd-networkd[1346]: lxc_health: Gained IPv6LL Jan 17 00:22:47.731378 systemd-networkd[1346]: lxca3d3348f83d3: Gained IPv6LL Jan 17 00:22:47.795230 systemd-networkd[1346]: lxc70be5f31d571: Gained IPv6LL Jan 17 00:22:50.003675 ntpd[1426]: Listen normally on 7 cilium_host 192.168.0.80:123 Jan 17 00:22:50.003853 ntpd[1426]: Listen normally on 8 cilium_net [fe80::1494:8bff:fe72:7651%4]:123 Jan 17 00:22:50.004428 ntpd[1426]: 17 Jan 00:22:50 ntpd[1426]: Listen normally on 7 cilium_host 192.168.0.80:123 Jan 17 00:22:50.004428 ntpd[1426]: 17 Jan 00:22:50 ntpd[1426]: Listen normally on 8 cilium_net [fe80::1494:8bff:fe72:7651%4]:123 Jan 17 00:22:50.004428 ntpd[1426]: 17 Jan 00:22:50 ntpd[1426]: Listen normally on 9 cilium_host [fe80::f85b:edff:feb8:9a16%5]:123 Jan 17 00:22:50.004428 ntpd[1426]: 17 Jan 00:22:50 ntpd[1426]: Listen normally on 10 cilium_vxlan [fe80::a068:c9ff:fe88:3dd%6]:123 Jan 17 00:22:50.004428 ntpd[1426]: 17 Jan 00:22:50 ntpd[1426]: Listen normally on 11 lxc_health [fe80::e83b:2dff:fead:97ec%8]:123 Jan 17 00:22:50.004428 ntpd[1426]: 17 Jan 00:22:50 ntpd[1426]: Listen normally on 12 lxc70be5f31d571 [fe80::849d:46ff:fe55:988c%10]:123 Jan 17 00:22:50.004428 ntpd[1426]: 17 Jan 00:22:50 ntpd[1426]: Listen normally on 13 lxca3d3348f83d3 
[fe80::10d4:6dff:fe35:ac2b%12]:123 Jan 17 00:22:50.003946 ntpd[1426]: Listen normally on 9 cilium_host [fe80::f85b:edff:feb8:9a16%5]:123 Jan 17 00:22:50.004014 ntpd[1426]: Listen normally on 10 cilium_vxlan [fe80::a068:c9ff:fe88:3dd%6]:123 Jan 17 00:22:50.004077 ntpd[1426]: Listen normally on 11 lxc_health [fe80::e83b:2dff:fead:97ec%8]:123 Jan 17 00:22:50.004138 ntpd[1426]: Listen normally on 12 lxc70be5f31d571 [fe80::849d:46ff:fe55:988c%10]:123 Jan 17 00:22:50.004200 ntpd[1426]: Listen normally on 13 lxca3d3348f83d3 [fe80::10d4:6dff:fe35:ac2b%12]:123 Jan 17 00:22:52.718292 containerd[1466]: time="2026-01-17T00:22:52.717596164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:52.718292 containerd[1466]: time="2026-01-17T00:22:52.717691668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:52.718292 containerd[1466]: time="2026-01-17T00:22:52.717721194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:52.719965 containerd[1466]: time="2026-01-17T00:22:52.719231911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:52.757486 containerd[1466]: time="2026-01-17T00:22:52.757146681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:52.757486 containerd[1466]: time="2026-01-17T00:22:52.757267845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:52.757486 containerd[1466]: time="2026-01-17T00:22:52.757291617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:52.758104 containerd[1466]: time="2026-01-17T00:22:52.757423368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:52.794234 systemd[1]: Started cri-containerd-3e3df1b0b75c5631111809b281b59c47c5421f7b0d6c0b42806d3b0326b44ffe.scope - libcontainer container 3e3df1b0b75c5631111809b281b59c47c5421f7b0d6c0b42806d3b0326b44ffe. Jan 17 00:22:52.840352 systemd[1]: Started cri-containerd-2b644f2c11ab176b067dfd9e5a7ca1413a48d0944bd3ba14470bea1c7c6ba36e.scope - libcontainer container 2b644f2c11ab176b067dfd9e5a7ca1413a48d0944bd3ba14470bea1c7c6ba36e. 
Jan 17 00:22:52.965258 containerd[1466]: time="2026-01-17T00:22:52.965126007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-25q7q,Uid:aceec747-699b-4f02-a78c-60285853a41d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e3df1b0b75c5631111809b281b59c47c5421f7b0d6c0b42806d3b0326b44ffe\"" Jan 17 00:22:52.993643 containerd[1466]: time="2026-01-17T00:22:52.992208951Z" level=info msg="CreateContainer within sandbox \"3e3df1b0b75c5631111809b281b59c47c5421f7b0d6c0b42806d3b0326b44ffe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:22:53.029076 containerd[1466]: time="2026-01-17T00:22:53.028949575Z" level=info msg="CreateContainer within sandbox \"3e3df1b0b75c5631111809b281b59c47c5421f7b0d6c0b42806d3b0326b44ffe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3559e6be5a5ef07fec891e7419fe7f47baf6e9129a9ea369f5699dcb8a9fbbb\"" Jan 17 00:22:53.029749 containerd[1466]: time="2026-01-17T00:22:53.029605416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l622x,Uid:704a9546-01f7-4711-9e22-1bf50f032a9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b644f2c11ab176b067dfd9e5a7ca1413a48d0944bd3ba14470bea1c7c6ba36e\"" Jan 17 00:22:53.031558 containerd[1466]: time="2026-01-17T00:22:53.031494836Z" level=info msg="StartContainer for \"d3559e6be5a5ef07fec891e7419fe7f47baf6e9129a9ea369f5699dcb8a9fbbb\"" Jan 17 00:22:53.047093 containerd[1466]: time="2026-01-17T00:22:53.046655096Z" level=info msg="CreateContainer within sandbox \"2b644f2c11ab176b067dfd9e5a7ca1413a48d0944bd3ba14470bea1c7c6ba36e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:22:53.090611 containerd[1466]: time="2026-01-17T00:22:53.090521803Z" level=info msg="CreateContainer within sandbox \"2b644f2c11ab176b067dfd9e5a7ca1413a48d0944bd3ba14470bea1c7c6ba36e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ddda00482a89215bd045485a286f701b3e9d498bffdca8f819eed040fc4548c\"" Jan 17 00:22:53.093300 containerd[1466]: time="2026-01-17T00:22:53.093237813Z" level=info msg="StartContainer for \"9ddda00482a89215bd045485a286f701b3e9d498bffdca8f819eed040fc4548c\"" Jan 17 00:22:53.124656 systemd[1]: Started cri-containerd-d3559e6be5a5ef07fec891e7419fe7f47baf6e9129a9ea369f5699dcb8a9fbbb.scope - libcontainer container d3559e6be5a5ef07fec891e7419fe7f47baf6e9129a9ea369f5699dcb8a9fbbb. Jan 17 00:22:53.168208 systemd[1]: Started cri-containerd-9ddda00482a89215bd045485a286f701b3e9d498bffdca8f819eed040fc4548c.scope - libcontainer container 9ddda00482a89215bd045485a286f701b3e9d498bffdca8f819eed040fc4548c. 
Jan 17 00:22:53.221155 containerd[1466]: time="2026-01-17T00:22:53.220911464Z" level=info msg="StartContainer for \"d3559e6be5a5ef07fec891e7419fe7f47baf6e9129a9ea369f5699dcb8a9fbbb\" returns successfully" Jan 17 00:22:53.242784 containerd[1466]: time="2026-01-17T00:22:53.242716336Z" level=info msg="StartContainer for \"9ddda00482a89215bd045485a286f701b3e9d498bffdca8f819eed040fc4548c\" returns successfully" Jan 17 00:22:53.511882 kubelet[2620]: I0117 00:22:53.510184 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l622x" podStartSLOduration=31.51015147 podStartE2EDuration="31.51015147s" podCreationTimestamp="2026-01-17 00:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:53.481693982 +0000 UTC m=+37.527492689" watchObservedRunningTime="2026-01-17 00:22:53.51015147 +0000 UTC m=+37.555950179" Jan 17 00:22:53.543685 kubelet[2620]: I0117 00:22:53.543589 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-25q7q" podStartSLOduration=31.543552684 podStartE2EDuration="31.543552684s" podCreationTimestamp="2026-01-17 00:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:53.511995414 +0000 UTC m=+37.557794147" watchObservedRunningTime="2026-01-17 00:22:53.543552684 +0000 UTC m=+37.589351391" Jan 17 00:23:14.241581 systemd[1]: Started sshd@9-10.128.0.88:22-4.153.228.146:34944.service - OpenSSH per-connection server daemon (4.153.228.146:34944). Jan 17 00:23:14.467898 sshd[3997]: Accepted publickey for core from 4.153.228.146 port 34944 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:14.470223 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:14.478894 systemd-logind[1438]: New session 10 of user core. Jan 17 00:23:14.485195 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:23:14.767697 sshd[3997]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:14.773199 systemd[1]: sshd@9-10.128.0.88:22-4.153.228.146:34944.service: Deactivated successfully. Jan 17 00:23:14.776891 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:23:14.779934 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:23:14.782374 systemd-logind[1438]: Removed session 10. Jan 17 00:23:19.814374 systemd[1]: Started sshd@10-10.128.0.88:22-4.153.228.146:37712.service - OpenSSH per-connection server daemon (4.153.228.146:37712). Jan 17 00:23:20.031941 sshd[4013]: Accepted publickey for core from 4.153.228.146 port 37712 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:20.034095 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:20.042598 systemd-logind[1438]: New session 11 of user core. Jan 17 00:23:20.049229 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:23:20.315323 sshd[4013]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:20.322142 systemd[1]: sshd@10-10.128.0.88:22-4.153.228.146:37712.service: Deactivated successfully. Jan 17 00:23:20.325403 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:23:20.326869 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. 
Jan 17 00:23:20.329085 systemd-logind[1438]: Removed session 11. Jan 17 00:23:25.367339 systemd[1]: Started sshd@11-10.128.0.88:22-4.153.228.146:59156.service - OpenSSH per-connection server daemon (4.153.228.146:59156). Jan 17 00:23:25.591147 sshd[4029]: Accepted publickey for core from 4.153.228.146 port 59156 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:25.593499 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:25.600380 systemd-logind[1438]: New session 12 of user core. Jan 17 00:23:25.608176 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:23:25.847348 sshd[4029]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:25.852993 systemd[1]: sshd@11-10.128.0.88:22-4.153.228.146:59156.service: Deactivated successfully. Jan 17 00:23:25.856938 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:23:25.858693 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:23:25.861629 systemd-logind[1438]: Removed session 12. Jan 17 00:23:30.897409 systemd[1]: Started sshd@12-10.128.0.88:22-4.153.228.146:59172.service - OpenSSH per-connection server daemon (4.153.228.146:59172). Jan 17 00:23:31.155609 sshd[4043]: Accepted publickey for core from 4.153.228.146 port 59172 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:31.156526 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:31.165662 systemd-logind[1438]: New session 13 of user core. Jan 17 00:23:31.171057 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:23:31.421765 sshd[4043]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:31.427531 systemd[1]: sshd@12-10.128.0.88:22-4.153.228.146:59172.service: Deactivated successfully. Jan 17 00:23:31.431325 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:23:31.434719 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:23:31.436488 systemd-logind[1438]: Removed session 13. Jan 17 00:23:36.473713 systemd[1]: Started sshd@13-10.128.0.88:22-4.153.228.146:47034.service - OpenSSH per-connection server daemon (4.153.228.146:47034). Jan 17 00:23:36.701899 sshd[4058]: Accepted publickey for core from 4.153.228.146 port 47034 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:36.704643 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:36.714909 systemd-logind[1438]: New session 14 of user core. Jan 17 00:23:36.719232 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:23:36.963009 sshd[4058]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:36.969117 systemd[1]: sshd@13-10.128.0.88:22-4.153.228.146:47034.service: Deactivated successfully. Jan 17 00:23:36.973208 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:23:36.974484 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:23:36.977000 systemd-logind[1438]: Removed session 14. Jan 17 00:23:37.013058 systemd[1]: Started sshd@14-10.128.0.88:22-4.153.228.146:47038.service - OpenSSH per-connection server daemon (4.153.228.146:47038). 
Jan 17 00:23:37.266203 sshd[4072]: Accepted publickey for core from 4.153.228.146 port 47038 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:37.268650 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:37.277537 systemd-logind[1438]: New session 15 of user core. Jan 17 00:23:37.285550 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:23:37.621522 sshd[4072]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:37.631268 systemd[1]: sshd@14-10.128.0.88:22-4.153.228.146:47038.service: Deactivated successfully. Jan 17 00:23:37.637119 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:23:37.643350 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:23:37.646164 systemd-logind[1438]: Removed session 15. Jan 17 00:23:37.675457 systemd[1]: Started sshd@15-10.128.0.88:22-4.153.228.146:47050.service - OpenSSH per-connection server daemon (4.153.228.146:47050). Jan 17 00:23:37.935391 sshd[4083]: Accepted publickey for core from 4.153.228.146 port 47050 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:37.938373 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:37.945955 systemd-logind[1438]: New session 16 of user core. Jan 17 00:23:37.953558 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:23:38.249254 sshd[4083]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:38.255600 systemd[1]: sshd@15-10.128.0.88:22-4.153.228.146:47050.service: Deactivated successfully. Jan 17 00:23:38.266921 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:23:38.272734 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:23:38.276011 systemd-logind[1438]: Removed session 16. Jan 17 00:23:43.298403 systemd[1]: Started sshd@16-10.128.0.88:22-4.153.228.146:47058.service - OpenSSH per-connection server daemon (4.153.228.146:47058). Jan 17 00:23:43.524725 sshd[4097]: Accepted publickey for core from 4.153.228.146 port 47058 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:43.526978 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:43.535774 systemd-logind[1438]: New session 17 of user core. Jan 17 00:23:43.538282 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:23:43.794154 sshd[4097]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:43.801169 systemd[1]: sshd@16-10.128.0.88:22-4.153.228.146:47058.service: Deactivated successfully. Jan 17 00:23:43.807192 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:23:43.811186 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:23:43.814020 systemd-logind[1438]: Removed session 17. Jan 17 00:23:48.848007 systemd[1]: Started sshd@17-10.128.0.88:22-4.153.228.146:34252.service - OpenSSH per-connection server daemon (4.153.228.146:34252). Jan 17 00:23:49.067318 sshd[4110]: Accepted publickey for core from 4.153.228.146 port 34252 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:49.069550 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:49.075998 systemd-logind[1438]: New session 18 of user core. Jan 17 00:23:49.084276 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 00:23:49.326242 sshd[4110]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:49.333009 systemd[1]: sshd@17-10.128.0.88:22-4.153.228.146:34252.service: Deactivated successfully. Jan 17 00:23:49.336248 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:23:49.337519 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:23:49.339213 systemd-logind[1438]: Removed session 18. Jan 17 00:23:54.380210 systemd[1]: Started sshd@18-10.128.0.88:22-4.153.228.146:34254.service - OpenSSH per-connection server daemon (4.153.228.146:34254). Jan 17 00:23:54.613750 sshd[4125]: Accepted publickey for core from 4.153.228.146 port 34254 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:54.615888 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:54.624923 systemd-logind[1438]: New session 19 of user core. Jan 17 00:23:54.629982 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:23:54.874491 sshd[4125]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:54.879851 systemd[1]: sshd@18-10.128.0.88:22-4.153.228.146:34254.service: Deactivated successfully. Jan 17 00:23:54.884417 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:23:54.887587 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:23:54.889785 systemd-logind[1438]: Removed session 19. Jan 17 00:23:54.925448 systemd[1]: Started sshd@19-10.128.0.88:22-4.153.228.146:35518.service - OpenSSH per-connection server daemon (4.153.228.146:35518). Jan 17 00:23:55.144278 sshd[4138]: Accepted publickey for core from 4.153.228.146 port 35518 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:55.147979 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:55.158126 systemd-logind[1438]: New session 20 of user core. Jan 17 00:23:55.166263 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:23:55.464124 sshd[4138]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:55.469636 systemd[1]: sshd@19-10.128.0.88:22-4.153.228.146:35518.service: Deactivated successfully. Jan 17 00:23:55.475133 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:23:55.477679 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:23:55.480205 systemd-logind[1438]: Removed session 20. Jan 17 00:23:55.509420 systemd[1]: Started sshd@20-10.128.0.88:22-4.153.228.146:35530.service - OpenSSH per-connection server daemon (4.153.228.146:35530). Jan 17 00:23:55.729099 sshd[4149]: Accepted publickey for core from 4.153.228.146 port 35530 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:55.731061 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:55.738002 systemd-logind[1438]: New session 21 of user core. Jan 17 00:23:55.746173 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:23:56.664244 sshd[4149]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:56.677481 systemd[1]: sshd@20-10.128.0.88:22-4.153.228.146:35530.service: Deactivated successfully. Jan 17 00:23:56.685237 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:23:56.689294 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. 
Jan 17 00:23:56.710396 systemd[1]: Started sshd@21-10.128.0.88:22-4.153.228.146:35536.service - OpenSSH per-connection server daemon (4.153.228.146:35536). Jan 17 00:23:56.713525 systemd-logind[1438]: Removed session 21. Jan 17 00:23:56.951369 sshd[4167]: Accepted publickey for core from 4.153.228.146 port 35536 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:56.953509 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:56.960882 systemd-logind[1438]: New session 22 of user core. Jan 17 00:23:56.971268 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:23:57.393100 sshd[4167]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:57.400938 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:23:57.401815 systemd[1]: sshd@21-10.128.0.88:22-4.153.228.146:35536.service: Deactivated successfully. Jan 17 00:23:57.405756 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:23:57.407696 systemd-logind[1438]: Removed session 22. Jan 17 00:23:57.440432 systemd[1]: Started sshd@22-10.128.0.88:22-4.153.228.146:35544.service - OpenSSH per-connection server daemon (4.153.228.146:35544). Jan 17 00:23:57.659731 sshd[4178]: Accepted publickey for core from 4.153.228.146 port 35544 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:23:57.661658 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:57.668012 systemd-logind[1438]: New session 23 of user core. Jan 17 00:23:57.676225 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:23:57.910194 sshd[4178]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:57.916203 systemd[1]: sshd@22-10.128.0.88:22-4.153.228.146:35544.service: Deactivated successfully. Jan 17 00:23:57.919550 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:23:57.922916 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:23:57.925450 systemd-logind[1438]: Removed session 23. Jan 17 00:24:02.953498 systemd[1]: Started sshd@23-10.128.0.88:22-4.153.228.146:35558.service - OpenSSH per-connection server daemon (4.153.228.146:35558). Jan 17 00:24:03.187654 sshd[4191]: Accepted publickey for core from 4.153.228.146 port 35558 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:24:03.188635 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:03.197516 systemd-logind[1438]: New session 24 of user core. Jan 17 00:24:03.203234 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:24:03.435591 sshd[4191]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:03.440917 systemd[1]: sshd@23-10.128.0.88:22-4.153.228.146:35558.service: Deactivated successfully. Jan 17 00:24:03.445328 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:24:03.448082 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:24:03.450571 systemd-logind[1438]: Removed session 24. Jan 17 00:24:08.483401 systemd[1]: Started sshd@24-10.128.0.88:22-4.153.228.146:48394.service - OpenSSH per-connection server daemon (4.153.228.146:48394). 
Jan 17 00:24:08.721939 sshd[4208]: Accepted publickey for core from 4.153.228.146 port 48394 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:24:08.724198 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:08.731781 systemd-logind[1438]: New session 25 of user core. Jan 17 00:24:08.738306 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:24:08.981280 sshd[4208]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:08.986026 systemd[1]: sshd@24-10.128.0.88:22-4.153.228.146:48394.service: Deactivated successfully. Jan 17 00:24:08.989498 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:24:08.993611 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:24:08.995638 systemd-logind[1438]: Removed session 25. Jan 17 00:24:14.028407 systemd[1]: Started sshd@25-10.128.0.88:22-4.153.228.146:48402.service - OpenSSH per-connection server daemon (4.153.228.146:48402). Jan 17 00:24:14.256193 sshd[4220]: Accepted publickey for core from 4.153.228.146 port 48402 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:24:14.258428 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:14.266790 systemd-logind[1438]: New session 26 of user core. Jan 17 00:24:14.272845 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:24:14.513294 sshd[4220]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:14.521459 systemd[1]: sshd@25-10.128.0.88:22-4.153.228.146:48402.service: Deactivated successfully. Jan 17 00:24:14.524947 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:24:14.527016 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:24:14.529240 systemd-logind[1438]: Removed session 26. Jan 17 00:24:14.567631 systemd[1]: Started sshd@26-10.128.0.88:22-4.153.228.146:51198.service - OpenSSH per-connection server daemon (4.153.228.146:51198). Jan 17 00:24:14.816973 sshd[4236]: Accepted publickey for core from 4.153.228.146 port 51198 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:24:14.819404 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:14.827272 systemd-logind[1438]: New session 27 of user core. Jan 17 00:24:14.832200 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:24:16.790956 containerd[1466]: time="2026-01-17T00:24:16.787180757Z" level=info msg="StopContainer for \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\" with timeout 30 (s)" Jan 17 00:24:16.793294 containerd[1466]: time="2026-01-17T00:24:16.791035897Z" level=info msg="Stop container \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\" with signal terminated" Jan 17 00:24:16.800081 systemd[1]: run-containerd-runc-k8s.io-781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da-runc.IUC7ij.mount: Deactivated successfully. Jan 17 00:24:16.829187 systemd[1]: cri-containerd-84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a.scope: Deactivated successfully. 
Jan 17 00:24:16.841124 containerd[1466]: time="2026-01-17T00:24:16.840727845Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:24:16.868073 containerd[1466]: time="2026-01-17T00:24:16.868010696Z" level=info msg="StopContainer for \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\" with timeout 2 (s)" Jan 17 00:24:16.868712 containerd[1466]: time="2026-01-17T00:24:16.868645790Z" level=info msg="Stop container \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\" with signal terminated" Jan 17 00:24:16.904561 systemd-networkd[1346]: lxc_health: Link DOWN Jan 17 00:24:16.904576 systemd-networkd[1346]: lxc_health: Lost carrier Jan 17 00:24:16.922382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a-rootfs.mount: Deactivated successfully. Jan 17 00:24:16.936093 systemd[1]: cri-containerd-781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da.scope: Deactivated successfully. Jan 17 00:24:16.936874 systemd[1]: cri-containerd-781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da.scope: Consumed 11.363s CPU time. Jan 17 00:24:16.956364 containerd[1466]: time="2026-01-17T00:24:16.956032660Z" level=info msg="shim disconnected" id=84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a namespace=k8s.io Jan 17 00:24:16.956364 containerd[1466]: time="2026-01-17T00:24:16.956171967Z" level=warning msg="cleaning up after shim disconnected" id=84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a namespace=k8s.io Jan 17 00:24:16.956364 containerd[1466]: time="2026-01-17T00:24:16.956192723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:16.991270 containerd[1466]: time="2026-01-17T00:24:16.991189735Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:24:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:24:16.997658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da-rootfs.mount: Deactivated successfully. Jan 17 00:24:17.000260 containerd[1466]: time="2026-01-17T00:24:16.999427148Z" level=info msg="StopContainer for \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\" returns successfully" Jan 17 00:24:17.004115 containerd[1466]: time="2026-01-17T00:24:17.002441934Z" level=info msg="StopPodSandbox for \"e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762\"" Jan 17 00:24:17.004115 containerd[1466]: time="2026-01-17T00:24:17.002520743Z" level=info msg="Container to stop \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:24:17.007098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762-shm.mount: Deactivated successfully. 
Jan 17 00:24:17.014674 containerd[1466]: time="2026-01-17T00:24:17.014313887Z" level=info msg="shim disconnected" id=781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da namespace=k8s.io Jan 17 00:24:17.014674 containerd[1466]: time="2026-01-17T00:24:17.014394981Z" level=warning msg="cleaning up after shim disconnected" id=781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da namespace=k8s.io Jan 17 00:24:17.014674 containerd[1466]: time="2026-01-17T00:24:17.014413429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:17.025410 systemd[1]: cri-containerd-e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762.scope: Deactivated successfully. Jan 17 00:24:17.054033 containerd[1466]: time="2026-01-17T00:24:17.053359981Z" level=info msg="StopContainer for \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\" returns successfully" Jan 17 00:24:17.054773 containerd[1466]: time="2026-01-17T00:24:17.054466680Z" level=info msg="StopPodSandbox for \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\"" Jan 17 00:24:17.054773 containerd[1466]: time="2026-01-17T00:24:17.054533499Z" level=info msg="Container to stop \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:24:17.054773 containerd[1466]: time="2026-01-17T00:24:17.054555808Z" level=info msg="Container to stop \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:24:17.054773 containerd[1466]: time="2026-01-17T00:24:17.054573493Z" level=info msg="Container to stop \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:24:17.054773 containerd[1466]: time="2026-01-17T00:24:17.054590934Z" level=info msg="Container to stop \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:24:17.054773 containerd[1466]: time="2026-01-17T00:24:17.054610874Z" level=info msg="Container to stop \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:24:17.074608 systemd[1]: cri-containerd-7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0.scope: Deactivated successfully. 
Jan 17 00:24:17.094072 containerd[1466]: time="2026-01-17T00:24:17.093950696Z" level=info msg="shim disconnected" id=e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762 namespace=k8s.io Jan 17 00:24:17.094072 containerd[1466]: time="2026-01-17T00:24:17.094038430Z" level=warning msg="cleaning up after shim disconnected" id=e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762 namespace=k8s.io Jan 17 00:24:17.094072 containerd[1466]: time="2026-01-17T00:24:17.094057681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:17.130811 containerd[1466]: time="2026-01-17T00:24:17.130668870Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:24:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:24:17.133447 containerd[1466]: time="2026-01-17T00:24:17.133370071Z" level=info msg="TearDown network for sandbox \"e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762\" successfully" Jan 17 00:24:17.133447 containerd[1466]: time="2026-01-17T00:24:17.133483047Z" level=info msg="StopPodSandbox for \"e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762\" returns successfully" Jan 17 00:24:17.135806 containerd[1466]: time="2026-01-17T00:24:17.134771730Z" level=info msg="shim disconnected" id=7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0 namespace=k8s.io Jan 17 00:24:17.135806 containerd[1466]: time="2026-01-17T00:24:17.135514889Z" level=warning msg="cleaning up after shim disconnected" id=7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0 namespace=k8s.io Jan 17 00:24:17.135806 containerd[1466]: time="2026-01-17T00:24:17.135554822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:17.177914 containerd[1466]: time="2026-01-17T00:24:17.177792021Z" level=info msg="TearDown network for sandbox \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" successfully" Jan 17 00:24:17.177914 containerd[1466]: time="2026-01-17T00:24:17.177908878Z" level=info msg="StopPodSandbox for \"7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0\" returns successfully" Jan 17 00:24:17.237295 kubelet[2620]: I0117 00:24:17.236654 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw2nr\" (UniqueName: \"kubernetes.io/projected/6a30224e-4681-468f-a56a-be1f49fb04e1-kube-api-access-rw2nr\") pod \"6a30224e-4681-468f-a56a-be1f49fb04e1\" (UID: \"6a30224e-4681-468f-a56a-be1f49fb04e1\") " Jan 17 00:24:17.237295 kubelet[2620]: I0117 00:24:17.236744 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a30224e-4681-468f-a56a-be1f49fb04e1-cilium-config-path\") pod \"6a30224e-4681-468f-a56a-be1f49fb04e1\" (UID: \"6a30224e-4681-468f-a56a-be1f49fb04e1\") " Jan 17 00:24:17.240689 kubelet[2620]: I0117 00:24:17.240538 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a30224e-4681-468f-a56a-be1f49fb04e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a30224e-4681-468f-a56a-be1f49fb04e1" (UID: "6a30224e-4681-468f-a56a-be1f49fb04e1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:24:17.242680 kubelet[2620]: I0117 00:24:17.242590 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a30224e-4681-468f-a56a-be1f49fb04e1-kube-api-access-rw2nr" (OuterVolumeSpecName: "kube-api-access-rw2nr") pod "6a30224e-4681-468f-a56a-be1f49fb04e1" (UID: "6a30224e-4681-468f-a56a-be1f49fb04e1"). InnerVolumeSpecName "kube-api-access-rw2nr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:24:17.339798 kubelet[2620]: I0117 00:24:17.337091 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-bpf-maps\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.339798 kubelet[2620]: I0117 00:24:17.337166 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-kernel\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.339798 kubelet[2620]: I0117 00:24:17.337203 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-config-path\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.339798 kubelet[2620]: I0117 00:24:17.337227 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-net\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.339798 kubelet[2620]: I0117 00:24:17.337257 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hubble-tls\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.339798 kubelet[2620]: I0117 00:24:17.337261 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.340303 kubelet[2620]: I0117 00:24:17.337283 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-etc-cni-netd\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.340303 kubelet[2620]: I0117 00:24:17.337309 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cni-path\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.340303 kubelet[2620]: I0117 00:24:17.337321 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.340303 kubelet[2620]: I0117 00:24:17.337336 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-xtables-lock\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.340303 kubelet[2620]: I0117 00:24:17.337350 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.340597 kubelet[2620]: I0117 00:24:17.337365 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-cgroup\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.340597 kubelet[2620]: I0117 00:24:17.337402 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.341248 kubelet[2620]: I0117 00:24:17.341189 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.341413 kubelet[2620]: I0117 00:24:17.341278 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cni-path" (OuterVolumeSpecName: "cni-path") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.341413 kubelet[2620]: I0117 00:24:17.341306 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.341413 kubelet[2620]: I0117 00:24:17.341361 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-run\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.341413 kubelet[2620]: I0117 00:24:17.341399 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hostproc\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.341630 kubelet[2620]: I0117 00:24:17.341433 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-clustermesh-secrets\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.341630 kubelet[2620]: I0117 00:24:17.341471 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ndq8\" (UniqueName: \"kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-kube-api-access-4ndq8\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.341630 kubelet[2620]: I0117 00:24:17.341503 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-lib-modules\") pod \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\" (UID: \"29f22310-78a2-4dd2-9ed3-b7cbecd2a977\") " Jan 17 00:24:17.341630 kubelet[2620]: I0117 00:24:17.341574 2620 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rw2nr\" (UniqueName: \"kubernetes.io/projected/6a30224e-4681-468f-a56a-be1f49fb04e1-kube-api-access-rw2nr\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.341630 kubelet[2620]: I0117 00:24:17.341608 2620 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-bpf-maps\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.341984 kubelet[2620]: I0117 00:24:17.341629 2620 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-kernel\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.341984 kubelet[2620]: I0117 00:24:17.341648 2620 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-host-proc-sys-net\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" 
Jan 17 00:24:17.341984 kubelet[2620]: I0117 00:24:17.341666 2620 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-etc-cni-netd\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.341984 kubelet[2620]: I0117 00:24:17.341683 2620 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cni-path\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.341984 kubelet[2620]: I0117 00:24:17.341700 2620 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-xtables-lock\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.341984 kubelet[2620]: I0117 00:24:17.341719 2620 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a30224e-4681-468f-a56a-be1f49fb04e1-cilium-config-path\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.341984 kubelet[2620]: I0117 00:24:17.341736 2620 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-cgroup\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.342371 kubelet[2620]: I0117 00:24:17.341776 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.342371 kubelet[2620]: I0117 00:24:17.341805 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.343174 kubelet[2620]: I0117 00:24:17.341850 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hostproc" (OuterVolumeSpecName: "hostproc") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:24:17.345236 kubelet[2620]: I0117 00:24:17.345164 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:24:17.349174 kubelet[2620]: I0117 00:24:17.349026 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:24:17.349379 kubelet[2620]: I0117 00:24:17.349261 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:24:17.351357 kubelet[2620]: I0117 00:24:17.351289 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-kube-api-access-4ndq8" (OuterVolumeSpecName: "kube-api-access-4ndq8") pod "29f22310-78a2-4dd2-9ed3-b7cbecd2a977" (UID: "29f22310-78a2-4dd2-9ed3-b7cbecd2a977"). InnerVolumeSpecName "kube-api-access-4ndq8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:24:17.442746 kubelet[2620]: I0117 00:24:17.442686 2620 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-run\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.442746 kubelet[2620]: I0117 00:24:17.442743 2620 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hostproc\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.442746 kubelet[2620]: I0117 00:24:17.442762 2620 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-clustermesh-secrets\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.443108 kubelet[2620]: I0117 00:24:17.442782 2620 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4ndq8\" (UniqueName: \"kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-kube-api-access-4ndq8\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.443108 kubelet[2620]: I0117 00:24:17.442798 2620 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-lib-modules\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.443108 kubelet[2620]: I0117 00:24:17.442901 2620 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-cilium-config-path\") on node \"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.443108 kubelet[2620]: I0117 00:24:17.442922 2620 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29f22310-78a2-4dd2-9ed3-b7cbecd2a977-hubble-tls\") on node 
\"ci-4081-3-6-nightly-20260116-2100-8973db2e92dca7ac607a\" DevicePath \"\"" Jan 17 00:24:17.692866 kubelet[2620]: I0117 00:24:17.690675 2620 scope.go:117] "RemoveContainer" containerID="84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a" Jan 17 00:24:17.702800 systemd[1]: Removed slice kubepods-besteffort-pod6a30224e_4681_468f_a56a_be1f49fb04e1.slice - libcontainer container kubepods-besteffort-pod6a30224e_4681_468f_a56a_be1f49fb04e1.slice. Jan 17 00:24:17.703689 containerd[1466]: time="2026-01-17T00:24:17.703057797Z" level=info msg="RemoveContainer for \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\"" Jan 17 00:24:17.716976 containerd[1466]: time="2026-01-17T00:24:17.715644501Z" level=info msg="RemoveContainer for \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\" returns successfully" Jan 17 00:24:17.717468 kubelet[2620]: I0117 00:24:17.717413 2620 scope.go:117] "RemoveContainer" containerID="84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a" Jan 17 00:24:17.720103 containerd[1466]: time="2026-01-17T00:24:17.720005055Z" level=error msg="ContainerStatus for \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\": not found" Jan 17 00:24:17.720393 kubelet[2620]: E0117 00:24:17.720328 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\": not found" containerID="84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a" Jan 17 00:24:17.720480 kubelet[2620]: I0117 00:24:17.720383 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a"} err="failed to get container status \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"84a68ce97079bfaedef729bb8099bd30fd140779d93c41b7d96d34a6f5098b1a\": not found" Jan 17 00:24:17.720535 kubelet[2620]: I0117 00:24:17.720469 2620 scope.go:117] "RemoveContainer" containerID="781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da" Jan 17 00:24:17.725139 containerd[1466]: time="2026-01-17T00:24:17.725070140Z" level=info msg="RemoveContainer for \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\"" Jan 17 00:24:17.727869 systemd[1]: Removed slice kubepods-burstable-pod29f22310_78a2_4dd2_9ed3_b7cbecd2a977.slice - libcontainer container kubepods-burstable-pod29f22310_78a2_4dd2_9ed3_b7cbecd2a977.slice. Jan 17 00:24:17.728326 systemd[1]: kubepods-burstable-pod29f22310_78a2_4dd2_9ed3_b7cbecd2a977.slice: Consumed 11.507s CPU time. 
Jan 17 00:24:17.733890 containerd[1466]: time="2026-01-17T00:24:17.732742983Z" level=info msg="RemoveContainer for \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\" returns successfully" Jan 17 00:24:17.734428 kubelet[2620]: I0117 00:24:17.734366 2620 scope.go:117] "RemoveContainer" containerID="a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b" Jan 17 00:24:17.740793 containerd[1466]: time="2026-01-17T00:24:17.740326499Z" level=info msg="RemoveContainer for \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\"" Jan 17 00:24:17.748071 containerd[1466]: time="2026-01-17T00:24:17.748018948Z" level=info msg="RemoveContainer for \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\" returns successfully" Jan 17 00:24:17.748739 kubelet[2620]: I0117 00:24:17.748690 2620 scope.go:117] "RemoveContainer" containerID="610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8" Jan 17 00:24:17.751640 containerd[1466]: time="2026-01-17T00:24:17.751538464Z" level=info msg="RemoveContainer for \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\"" Jan 17 00:24:17.759510 containerd[1466]: time="2026-01-17T00:24:17.758427623Z" level=info msg="RemoveContainer for \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\" returns successfully" Jan 17 00:24:17.760241 kubelet[2620]: I0117 00:24:17.758957 2620 scope.go:117] "RemoveContainer" containerID="d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043" Jan 17 00:24:17.763460 containerd[1466]: time="2026-01-17T00:24:17.763193966Z" level=info msg="RemoveContainer for \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\"" Jan 17 00:24:17.768160 containerd[1466]: time="2026-01-17T00:24:17.768077019Z" level=info msg="RemoveContainer for \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\" returns successfully" Jan 17 00:24:17.768504 kubelet[2620]: I0117 00:24:17.768466 2620 scope.go:117] "RemoveContainer" containerID="a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372" Jan 17 00:24:17.770751 containerd[1466]: time="2026-01-17T00:24:17.770362629Z" level=info msg="RemoveContainer for \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\"" Jan 17 00:24:17.774750 containerd[1466]: time="2026-01-17T00:24:17.774536023Z" level=info msg="RemoveContainer for \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\" returns successfully" Jan 17 00:24:17.775169 kubelet[2620]: I0117 00:24:17.775055 2620 scope.go:117] "RemoveContainer" containerID="781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da" Jan 17 00:24:17.775557 containerd[1466]: time="2026-01-17T00:24:17.775478780Z" level=error msg="ContainerStatus for \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\": not found" Jan 17 00:24:17.775989 kubelet[2620]: E0117 00:24:17.775818 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\": not found" containerID="781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da" Jan 17 00:24:17.775989 kubelet[2620]: I0117 00:24:17.775895 2620 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da"} err="failed to get container status \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\": rpc error: code = NotFound desc = an error occurred when try to find container \"781b9a72152262b76661505818e0fef6bcd6753b7653685a6b8e64017307d8da\": not found" Jan 17 00:24:17.775989 kubelet[2620]: I0117 00:24:17.775930 2620 scope.go:117] "RemoveContainer" containerID="a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b" Jan 17 00:24:17.776569 containerd[1466]: time="2026-01-17T00:24:17.776289120Z" level=error msg="ContainerStatus for \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\": not found" Jan 17 00:24:17.776936 kubelet[2620]: E0117 00:24:17.776902 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\": not found" containerID="a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b" Jan 17 00:24:17.777059 kubelet[2620]: I0117 00:24:17.776948 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b"} err="failed to get container status \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0df1c0980655e4c89a6d264d9cf3f53e562d8d25ce99a658ffd03fa1267de7b\": not found" Jan 17 00:24:17.777059 kubelet[2620]: I0117 00:24:17.776985 2620 scope.go:117] "RemoveContainer" containerID="610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8" Jan 17 00:24:17.777297 containerd[1466]: time="2026-01-17T00:24:17.777251557Z" level=error msg="ContainerStatus for \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\": not found" Jan 17 00:24:17.777465 kubelet[2620]: E0117 00:24:17.777426 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\": not found" containerID="610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8" Jan 17 00:24:17.777539 kubelet[2620]: I0117 00:24:17.777469 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8"} err="failed to get container status \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\": rpc error: code = NotFound desc = an error occurred when try to find container \"610837201ed6fd38f746e276e984b399d984fdc8a74eaf8d208d6338c020cba8\": not found" Jan 17 00:24:17.777539 kubelet[2620]: I0117 00:24:17.777497 2620 scope.go:117] "RemoveContainer" containerID="d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043" Jan 17 00:24:17.778230 containerd[1466]: time="2026-01-17T00:24:17.777912964Z" level=error msg="ContainerStatus for \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\": not found" Jan 17 00:24:17.778338 kubelet[2620]: E0117 00:24:17.778096 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\": not found" containerID="d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043" Jan 17 00:24:17.778338 kubelet[2620]: I0117 00:24:17.778129 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043"} err="failed to get container status \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7869cff4e1c0335cf6f966e9ce93375414e54d8feb597ace38ceedf3ee2e043\": not found" Jan 17 00:24:17.778338 kubelet[2620]: I0117 00:24:17.778151 2620 scope.go:117] "RemoveContainer" containerID="a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372" Jan 17 00:24:17.778538 containerd[1466]: time="2026-01-17T00:24:17.778382238Z" level=error msg="ContainerStatus for \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\": not found" Jan 17 00:24:17.778602 kubelet[2620]: E0117 00:24:17.778530 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\": not found" containerID="a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372" Jan 17 00:24:17.778602 kubelet[2620]: I0117 00:24:17.778560 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372"} err="failed to get container status \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6299daa3f79bf09a136a28292859ada16812e64522b5d69dee1b2287727a372\": not found" Jan 17 00:24:17.784613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1bf4566f623d0fb94db15304e1e2e537159e6a991e5af798636c783eb305762-rootfs.mount: Deactivated successfully. Jan 17 00:24:17.784744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0-rootfs.mount: Deactivated successfully. Jan 17 00:24:17.784846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7faa57b937647a7223c4551706865f1b83c3a44c078c9592c66065c23cd648d0-shm.mount: Deactivated successfully. Jan 17 00:24:17.784955 systemd[1]: var-lib-kubelet-pods-29f22310\x2d78a2\x2d4dd2\x2d9ed3\x2db7cbecd2a977-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:24:17.785063 systemd[1]: var-lib-kubelet-pods-6a30224e\x2d4681\x2d468f\x2da56a\x2dbe1f49fb04e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drw2nr.mount: Deactivated successfully. 
Jan 17 00:24:17.785177 systemd[1]: var-lib-kubelet-pods-29f22310\x2d78a2\x2d4dd2\x2d9ed3\x2db7cbecd2a977-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ndq8.mount: Deactivated successfully. Jan 17 00:24:17.785286 systemd[1]: var-lib-kubelet-pods-29f22310\x2d78a2\x2d4dd2\x2d9ed3\x2db7cbecd2a977-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:24:18.226263 kubelet[2620]: I0117 00:24:18.226203 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f22310-78a2-4dd2-9ed3-b7cbecd2a977" path="/var/lib/kubelet/pods/29f22310-78a2-4dd2-9ed3-b7cbecd2a977/volumes" Jan 17 00:24:18.227653 kubelet[2620]: I0117 00:24:18.227581 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a30224e-4681-468f-a56a-be1f49fb04e1" path="/var/lib/kubelet/pods/6a30224e-4681-468f-a56a-be1f49fb04e1/volumes" Jan 17 00:24:18.719198 sshd[4236]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:18.726990 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:24:18.727791 systemd[1]: sshd@26-10.128.0.88:22-4.153.228.146:51198.service: Deactivated successfully. Jan 17 00:24:18.733276 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:24:18.733667 systemd[1]: session-27.scope: Consumed 1.172s CPU time. Jan 17 00:24:18.735328 systemd-logind[1438]: Removed session 27. Jan 17 00:24:18.768490 systemd[1]: Started sshd@27-10.128.0.88:22-4.153.228.146:51200.service - OpenSSH per-connection server daemon (4.153.228.146:51200). Jan 17 00:24:19.003523 ntpd[1426]: Deleting interface #11 lxc_health, fe80::e83b:2dff:fead:97ec%8#123, interface stats: received=0, sent=0, dropped=0, active_time=89 secs Jan 17 00:24:19.004285 ntpd[1426]: 17 Jan 00:24:19 ntpd[1426]: Deleting interface #11 lxc_health, fe80::e83b:2dff:fead:97ec%8#123, interface stats: received=0, sent=0, dropped=0, active_time=89 secs Jan 17 00:24:19.029448 sshd[4401]: Accepted publickey for core from 4.153.228.146 port 51200 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:24:19.031802 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:19.040041 systemd-logind[1438]: New session 28 of user core. Jan 17 00:24:19.050206 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:24:20.997159 sshd[4401]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:21.011131 systemd[1]: sshd@27-10.128.0.88:22-4.153.228.146:51200.service: Deactivated successfully. Jan 17 00:24:21.018497 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:24:21.019956 systemd[1]: session-28.scope: Consumed 1.721s CPU time. Jan 17 00:24:21.027890 systemd-logind[1438]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:24:21.060114 systemd[1]: Created slice kubepods-burstable-pod22f09019_dc11_49fb_913d_c7c15c328d3b.slice - libcontainer container kubepods-burstable-pod22f09019_dc11_49fb_913d_c7c15c328d3b.slice. 
Jan 17 00:24:21.064863 kubelet[2620]: I0117 00:24:21.063695 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-lib-modules\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.064863 kubelet[2620]: I0117 00:24:21.063757 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/22f09019-dc11-49fb-913d-c7c15c328d3b-cilium-ipsec-secrets\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.064863 kubelet[2620]: I0117 00:24:21.063787 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-hostproc\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.064863 kubelet[2620]: I0117 00:24:21.063818 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-cni-path\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.064863 kubelet[2620]: I0117 00:24:21.063907 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-xtables-lock\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.064863 kubelet[2620]: I0117 00:24:21.063936 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22f09019-dc11-49fb-913d-c7c15c328d3b-hubble-tls\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.065717 kubelet[2620]: I0117 00:24:21.063964 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22f09019-dc11-49fb-913d-c7c15c328d3b-clustermesh-secrets\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.065717 kubelet[2620]: I0117 00:24:21.063987 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22f09019-dc11-49fb-913d-c7c15c328d3b-cilium-config-path\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.065717 kubelet[2620]: I0117 00:24:21.064012 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-host-proc-sys-net\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.065717 kubelet[2620]: I0117 00:24:21.064041 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-host-proc-sys-kernel\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.065717 kubelet[2620]: I0117 00:24:21.064068 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htp95\" (UniqueName: \"kubernetes.io/projected/22f09019-dc11-49fb-913d-c7c15c328d3b-kube-api-access-htp95\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.067187 kubelet[2620]: I0117 00:24:21.064102 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-cilium-run\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.067187 kubelet[2620]: I0117 00:24:21.064135 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-bpf-maps\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.067187 kubelet[2620]: I0117 00:24:21.064167 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-cilium-cgroup\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.067187 kubelet[2620]: I0117 00:24:21.064239 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22f09019-dc11-49fb-913d-c7c15c328d3b-etc-cni-netd\") pod \"cilium-blcqb\" (UID: \"22f09019-dc11-49fb-913d-c7c15c328d3b\") " pod="kube-system/cilium-blcqb" Jan 17 00:24:21.070465 systemd[1]: Started sshd@28-10.128.0.88:22-4.153.228.146:51202.service - OpenSSH per-connection server daemon (4.153.228.146:51202). Jan 17 00:24:21.074771 systemd-logind[1438]: Removed session 28. Jan 17 00:24:21.332266 sshd[4413]: Accepted publickey for core from 4.153.228.146 port 51202 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:24:21.335543 sshd[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:21.347940 systemd-logind[1438]: New session 29 of user core. Jan 17 00:24:21.352126 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:24:21.381488 containerd[1466]: time="2026-01-17T00:24:21.381398912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-blcqb,Uid:22f09019-dc11-49fb-913d-c7c15c328d3b,Namespace:kube-system,Attempt:0,}" Jan 17 00:24:21.422539 containerd[1466]: time="2026-01-17T00:24:21.422350521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:24:21.422539 containerd[1466]: time="2026-01-17T00:24:21.422441227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:24:21.422539 containerd[1466]: time="2026-01-17T00:24:21.422468397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:21.423018 containerd[1466]: time="2026-01-17T00:24:21.422614290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:24:21.457172 systemd[1]: Started cri-containerd-82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60.scope - libcontainer container 82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60. Jan 17 00:24:21.467176 kubelet[2620]: E0117 00:24:21.467059 2620 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:24:21.501718 containerd[1466]: time="2026-01-17T00:24:21.501443730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-blcqb,Uid:22f09019-dc11-49fb-913d-c7c15c328d3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\"" Jan 17 00:24:21.507206 sshd[4413]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:21.516222 systemd[1]: sshd@28-10.128.0.88:22-4.153.228.146:51202.service: Deactivated successfully. Jan 17 00:24:21.517469 containerd[1466]: time="2026-01-17T00:24:21.517154687Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:24:21.521582 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:24:21.527726 systemd-logind[1438]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:24:21.530380 systemd-logind[1438]: Removed session 29. Jan 17 00:24:21.536182 containerd[1466]: time="2026-01-17T00:24:21.536082000Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5\"" Jan 17 00:24:21.538207 containerd[1466]: time="2026-01-17T00:24:21.538148854Z" level=info msg="StartContainer for \"6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5\"" Jan 17 00:24:21.558034 systemd[1]: Started sshd@29-10.128.0.88:22-4.153.228.146:51214.service - OpenSSH per-connection server daemon (4.153.228.146:51214). Jan 17 00:24:21.593382 systemd[1]: Started cri-containerd-6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5.scope - libcontainer container 6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5. Jan 17 00:24:21.646059 containerd[1466]: time="2026-01-17T00:24:21.645989988Z" level=info msg="StartContainer for \"6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5\" returns successfully" Jan 17 00:24:21.660938 systemd[1]: cri-containerd-6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5.scope: Deactivated successfully. 
Jan 17 00:24:21.707259 containerd[1466]: time="2026-01-17T00:24:21.706309626Z" level=info msg="shim disconnected" id=6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5 namespace=k8s.io Jan 17 00:24:21.707686 containerd[1466]: time="2026-01-17T00:24:21.707642630Z" level=warning msg="cleaning up after shim disconnected" id=6b9439ed4e355d77b7dc3c24e634e4bb42f5069981106f9d05d74c8cfaf35ae5 namespace=k8s.io Jan 17 00:24:21.708089 containerd[1466]: time="2026-01-17T00:24:21.707775453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:21.746445 containerd[1466]: time="2026-01-17T00:24:21.746340877Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:24:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:24:21.806183 sshd[4471]: Accepted publickey for core from 4.153.228.146 port 51214 ssh2: RSA SHA256:1hUTuauZP/CHZgPJDKAapAcfAaclTZKib8tdvTKB8CA Jan 17 00:24:21.807191 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:21.814952 systemd-logind[1438]: New session 30 of user core. Jan 17 00:24:21.821177 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:24:22.750049 containerd[1466]: time="2026-01-17T00:24:22.749624241Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:24:22.782971 containerd[1466]: time="2026-01-17T00:24:22.782201243Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92\"" Jan 17 00:24:22.788299 containerd[1466]: time="2026-01-17T00:24:22.787273367Z" level=info msg="StartContainer for \"2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92\"" Jan 17 00:24:22.859308 systemd[1]: Started cri-containerd-2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92.scope - libcontainer container 2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92. Jan 17 00:24:22.911502 containerd[1466]: time="2026-01-17T00:24:22.911411409Z" level=info msg="StartContainer for \"2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92\" returns successfully" Jan 17 00:24:22.923188 systemd[1]: cri-containerd-2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92.scope: Deactivated successfully. Jan 17 00:24:22.964062 containerd[1466]: time="2026-01-17T00:24:22.963948966Z" level=info msg="shim disconnected" id=2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92 namespace=k8s.io Jan 17 00:24:22.964062 containerd[1466]: time="2026-01-17T00:24:22.964038088Z" level=warning msg="cleaning up after shim disconnected" id=2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92 namespace=k8s.io Jan 17 00:24:22.964062 containerd[1466]: time="2026-01-17T00:24:22.964054774Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:23.179757 systemd[1]: run-containerd-runc-k8s.io-2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92-runc.0BRAny.mount: Deactivated successfully. 
Jan 17 00:24:23.180551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f0cf4047b03e33bdfca16fa38a050cc5a643b8e2590d3cd058f13cbdad8ce92-rootfs.mount: Deactivated successfully. Jan 17 00:24:23.749412 containerd[1466]: time="2026-01-17T00:24:23.748933543Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:24:23.775077 containerd[1466]: time="2026-01-17T00:24:23.775011787Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd\"" Jan 17 00:24:23.777806 containerd[1466]: time="2026-01-17T00:24:23.777750559Z" level=info msg="StartContainer for \"4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd\"" Jan 17 00:24:23.832884 systemd[1]: run-containerd-runc-k8s.io-4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd-runc.d0HPPW.mount: Deactivated successfully. Jan 17 00:24:23.844132 systemd[1]: Started cri-containerd-4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd.scope - libcontainer container 4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd. Jan 17 00:24:23.892283 containerd[1466]: time="2026-01-17T00:24:23.892209381Z" level=info msg="StartContainer for \"4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd\" returns successfully" Jan 17 00:24:23.898321 systemd[1]: cri-containerd-4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd.scope: Deactivated successfully. Jan 17 00:24:23.937601 containerd[1466]: time="2026-01-17T00:24:23.937509125Z" level=info msg="shim disconnected" id=4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd namespace=k8s.io Jan 17 00:24:23.937601 containerd[1466]: time="2026-01-17T00:24:23.937597162Z" level=warning msg="cleaning up after shim disconnected" id=4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd namespace=k8s.io Jan 17 00:24:23.937601 containerd[1466]: time="2026-01-17T00:24:23.937611519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:24.178976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4385d6249948c948a5e51b3ff132fc4efc9a47b87e434bab5c573d142b1fa5dd-rootfs.mount: Deactivated successfully. Jan 17 00:24:24.754284 containerd[1466]: time="2026-01-17T00:24:24.754223568Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:24:24.783547 containerd[1466]: time="2026-01-17T00:24:24.783288473Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5\"" Jan 17 00:24:24.787208 containerd[1466]: time="2026-01-17T00:24:24.787144779Z" level=info msg="StartContainer for \"58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5\"" Jan 17 00:24:24.839206 systemd[1]: Started cri-containerd-58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5.scope - libcontainer container 58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5. 
Jan 17 00:24:24.885736 systemd[1]: cri-containerd-58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5.scope: Deactivated successfully. Jan 17 00:24:24.892272 containerd[1466]: time="2026-01-17T00:24:24.892056614Z" level=info msg="StartContainer for \"58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5\" returns successfully" Jan 17 00:24:24.932906 containerd[1466]: time="2026-01-17T00:24:24.932788857Z" level=info msg="shim disconnected" id=58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5 namespace=k8s.io Jan 17 00:24:24.932906 containerd[1466]: time="2026-01-17T00:24:24.932909075Z" level=warning msg="cleaning up after shim disconnected" id=58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5 namespace=k8s.io Jan 17 00:24:24.932906 containerd[1466]: time="2026-01-17T00:24:24.932924642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:25.180075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a7740fb5234422a5f62e123a98c882651b8b802571a916eb8c07e6e62271e5-rootfs.mount: Deactivated successfully. Jan 17 00:24:25.762699 containerd[1466]: time="2026-01-17T00:24:25.762611686Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:24:25.797344 containerd[1466]: time="2026-01-17T00:24:25.797265085Z" level=info msg="CreateContainer within sandbox \"82d217782bbc69a01fcfbf80669556ee6934e29f752ecb2ace5c205fa407fc60\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780\"" Jan 17 00:24:25.800176 containerd[1466]: time="2026-01-17T00:24:25.800114633Z" level=info msg="StartContainer for \"4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780\"" Jan 17 00:24:25.856216 systemd[1]: Started cri-containerd-4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780.scope - libcontainer container 4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780. Jan 17 00:24:25.917006 containerd[1466]: time="2026-01-17T00:24:25.916876374Z" level=info msg="StartContainer for \"4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780\" returns successfully" Jan 17 00:24:26.532911 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 17 00:24:28.331172 systemd[1]: run-containerd-runc-k8s.io-4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780-runc.5E38Uq.mount: Deactivated successfully. Jan 17 00:24:30.269118 systemd-networkd[1346]: lxc_health: Link UP Jan 17 00:24:30.317780 systemd-networkd[1346]: lxc_health: Gained carrier Jan 17 00:24:30.601854 systemd[1]: run-containerd-runc-k8s.io-4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780-runc.yXQsUn.mount: Deactivated successfully. 
Jan 17 00:24:31.411325 systemd-networkd[1346]: lxc_health: Gained IPv6LL Jan 17 00:24:31.425100 kubelet[2620]: I0117 00:24:31.424079 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-blcqb" podStartSLOduration=11.424019133 podStartE2EDuration="11.424019133s" podCreationTimestamp="2026-01-17 00:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:24:26.8085358 +0000 UTC m=+130.854334521" watchObservedRunningTime="2026-01-17 00:24:31.424019133 +0000 UTC m=+135.469817844" Jan 17 00:24:33.014315 systemd[1]: run-containerd-runc-k8s.io-4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780-runc.tgl5aA.mount: Deactivated successfully. Jan 17 00:24:34.003679 ntpd[1426]: Listen normally on 14 lxc_health [fe80::ef:bbff:fe4f:fe23%14]:123 Jan 17 00:24:34.004491 ntpd[1426]: 17 Jan 00:24:34 ntpd[1426]: Listen normally on 14 lxc_health [fe80::ef:bbff:fe4f:fe23%14]:123 Jan 17 00:24:35.292744 systemd[1]: run-containerd-runc-k8s.io-4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780-runc.N0GkDv.mount: Deactivated successfully. Jan 17 00:24:37.553911 systemd[1]: run-containerd-runc-k8s.io-4b4c1a64d2c38b3d68e49aeca0f5c7c79f5d7c6f9981a7ac61d1050e46cd2780-runc.3sdTN6.mount: Deactivated successfully. Jan 17 00:24:37.719181 sshd[4471]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:37.725624 systemd[1]: sshd@29-10.128.0.88:22-4.153.228.146:51214.service: Deactivated successfully. Jan 17 00:24:37.731155 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:24:37.735891 systemd-logind[1438]: Session 30 logged out. Waiting for processes to exit. Jan 17 00:24:37.738889 systemd-logind[1438]: Removed session 30.