Mar 13 00:38:46.170331 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:38:46.170373 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:38:46.170394 kernel: BIOS-provided physical RAM map:
Mar 13 00:38:46.170409 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 13 00:38:46.170423 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 13 00:38:46.170435 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 13 00:38:46.170452 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 13 00:38:46.170467 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 13 00:38:46.170481 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable
Mar 13 00:38:46.170499 kernel: BIOS-e820: [mem 0x00000000bd2e5000-0x00000000bd2eefff] ACPI data
Mar 13 00:38:46.170519 kernel: BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable
Mar 13 00:38:46.170533 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Mar 13 00:38:46.170556 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 13 00:38:46.170572 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 13 00:38:46.170591 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 13 00:38:46.170612 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 13 00:38:46.172673 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 13 00:38:46.172699 kernel: NX (Execute Disable) protection: active
Mar 13 00:38:46.172716 kernel: APIC: Static calls initialized
Mar 13 00:38:46.172733 kernel: efi: EFI v2.7 by EDK II
Mar 13 00:38:46.172752 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 RNG=0xbfb73018 TPMEventLog=0xbd2e5018
Mar 13 00:38:46.172768 kernel: random: crng init done
Mar 13 00:38:46.172784 kernel: secureboot: Secure boot disabled
Mar 13 00:38:46.172799 kernel: SMBIOS 2.4 present.
Mar 13 00:38:46.172814 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Mar 13 00:38:46.172837 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:38:46.172853 kernel: Hypervisor detected: KVM
Mar 13 00:38:46.172868 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 13 00:38:46.172882 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:38:46.172897 kernel: kvm-clock: using sched offset of 15671371896 cycles
Mar 13 00:38:46.172914 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:38:46.172929 kernel: tsc: Detected 2299.998 MHz processor
Mar 13 00:38:46.172945 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:38:46.172961 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:38:46.172977 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 13 00:38:46.172997 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Mar 13 00:38:46.173014 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:38:46.173031 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 13 00:38:46.173047 kernel: Using GB pages for direct mapping
Mar 13 00:38:46.173064 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:38:46.173087 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 13 00:38:46.173105 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 13 00:38:46.173126 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 13 00:38:46.173143 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 13 00:38:46.173159 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 13 00:38:46.173177 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Mar 13 00:38:46.173193 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 13 00:38:46.173210 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 13 00:38:46.173226 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 13 00:38:46.173245 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 13 00:38:46.173262 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 13 00:38:46.173278 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 13 00:38:46.173295 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 13 00:38:46.173311 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 13 00:38:46.173327 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 13 00:38:46.173343 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 13 00:38:46.173360 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 13 00:38:46.173376 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 13 00:38:46.173406 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 13 00:38:46.173422 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 13 00:38:46.173439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 13 00:38:46.173455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 13 00:38:46.173472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 13 00:38:46.173488 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Mar 13 00:38:46.173506 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Mar 13 00:38:46.173523 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Mar 13 00:38:46.173540 kernel: Zone ranges:
Mar 13 00:38:46.173561 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:38:46.173578 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:38:46.173594 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 13 00:38:46.173611 kernel: Device empty
Mar 13 00:38:46.173717 kernel: Movable zone start for each node
Mar 13 00:38:46.173735 kernel: Early memory node ranges
Mar 13 00:38:46.173753 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 13 00:38:46.173771 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 13 00:38:46.173789 kernel: node 0: [mem 0x0000000000100000-0x00000000bd2e4fff]
Mar 13 00:38:46.173813 kernel: node 0: [mem 0x00000000bd2ef000-0x00000000bf8ecfff]
Mar 13 00:38:46.173830 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 13 00:38:46.173848 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 13 00:38:46.173866 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 13 00:38:46.173884 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:38:46.173902 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 13 00:38:46.173920 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 13 00:38:46.173938 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Mar 13 00:38:46.173956 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 13 00:38:46.173977 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 13 00:38:46.173995 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 13 00:38:46.174013 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:38:46.174031 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:38:46.174049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:38:46.174067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:38:46.174085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:38:46.174103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:38:46.174120 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:38:46.174142 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:38:46.174158 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:38:46.174175 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:38:46.174193 kernel: CPU topo: Max. threads per core: 2
Mar 13 00:38:46.174211 kernel: CPU topo: Num. cores per package: 1
Mar 13 00:38:46.174229 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:38:46.174247 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:38:46.174265 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 13 00:38:46.174282 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:38:46.174301 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:38:46.174322 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:38:46.174341 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:38:46.174358 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:38:46.174376 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:38:46.174400 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:38:46.174418 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:38:46.174438 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:38:46.174456 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 13 00:38:46.174478 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:38:46.174496 kernel: Fallback order for Node 0: 0
Mar 13 00:38:46.174514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136
Mar 13 00:38:46.174532 kernel: Policy zone: Normal
Mar 13 00:38:46.174550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:38:46.174568 kernel: software IO TLB: area num 2.
Mar 13 00:38:46.174599 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:38:46.174891 kernel: Kernel/User page tables isolation: enabled
Mar 13 00:38:46.174916 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:38:46.174936 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:38:46.174955 kernel: Dynamic Preempt: voluntary
Mar 13 00:38:46.174974 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:38:46.175141 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:38:46.175160 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:38:46.175180 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:38:46.175199 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:38:46.175219 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:38:46.175379 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:38:46.175405 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:38:46.175424 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:38:46.175444 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:38:46.175463 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:38:46.175483 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 13 00:38:46.175502 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:38:46.175521 kernel: Console: colour dummy device 80x25
Mar 13 00:38:46.175545 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:38:46.175565 kernel: ACPI: Core revision 20240827
Mar 13 00:38:46.175583 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:38:46.175793 kernel: x2apic enabled
Mar 13 00:38:46.175810 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:38:46.175827 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 13 00:38:46.175844 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 13 00:38:46.175861 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 13 00:38:46.175878 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 13 00:38:46.175896 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 13 00:38:46.175919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:38:46.175938 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Mar 13 00:38:46.176053 kernel: Spectre V2 : Mitigation: IBRS
Mar 13 00:38:46.176071 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:38:46.176089 kernel: RETBleed: Mitigation: IBRS
Mar 13 00:38:46.176106 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 13 00:38:46.176124 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Mar 13 00:38:46.176142 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 13 00:38:46.176164 kernel: MDS: Mitigation: Clear CPU buffers
Mar 13 00:38:46.176181 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:38:46.176199 kernel: active return thunk: its_return_thunk
Mar 13 00:38:46.176218 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 13 00:38:46.176236 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:38:46.176254 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:38:46.176272 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:38:46.176291 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:38:46.176309 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 13 00:38:46.176332 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:38:46.176350 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:38:46.176368 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:38:46.176386 kernel: landlock: Up and running.
Mar 13 00:38:46.176411 kernel: SELinux: Initializing.
Mar 13 00:38:46.176430 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 00:38:46.176448 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 00:38:46.176466 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 13 00:38:46.176485 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 13 00:38:46.176506 kernel: signal: max sigframe size: 1776
Mar 13 00:38:46.176524 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:38:46.176542 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:38:46.176559 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:38:46.176577 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:38:46.176595 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:38:46.176612 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:38:46.176667 kernel: .... node #0, CPUs: #1
Mar 13 00:38:46.176687 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 13 00:38:46.176712 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 13 00:38:46.176730 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:38:46.176748 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 13 00:38:46.176767 kernel: Memory: 7555816K/7860544K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 298900K reserved, 0K cma-reserved)
Mar 13 00:38:46.176785 kernel: devtmpfs: initialized
Mar 13 00:38:46.176802 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:38:46.176819 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 13 00:38:46.176837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:38:46.176859 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:38:46.176876 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:38:46.176894 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:38:46.176912 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:38:46.176930 kernel: audit: type=2000 audit(1773362320.899:1): state=initialized audit_enabled=0 res=1
Mar 13 00:38:46.176947 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:38:46.176965 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:38:46.176982 kernel: cpuidle: using governor menu
Mar 13 00:38:46.177000 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:38:46.177023 kernel: dca service started, version 1.12.1
Mar 13 00:38:46.177041 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:38:46.177059 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:38:46.177077 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:38:46.177095 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:38:46.177113 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:38:46.177130 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:38:46.177148 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:38:46.177164 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:38:46.177187 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:38:46.177206 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 13 00:38:46.177223 kernel: ACPI: Interpreter enabled
Mar 13 00:38:46.177240 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:38:46.177257 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:38:46.177273 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:38:46.177291 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 13 00:38:46.177309 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 13 00:38:46.177325 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:38:46.177608 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:38:46.178885 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 13 00:38:46.179086 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 13 00:38:46.179109 kernel: PCI host bridge to bus 0000:00
Mar 13 00:38:46.179284 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:38:46.179454 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:38:46.181876 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:38:46.182073 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 13 00:38:46.182248 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:38:46.182461 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:38:46.182742 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:38:46.182963 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Mar 13 00:38:46.183156 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 13 00:38:46.183368 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Mar 13 00:38:46.184760 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Mar 13 00:38:46.184970 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Mar 13 00:38:46.185302 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:38:46.187689 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Mar 13 00:38:46.188041 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Mar 13 00:38:46.188508 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 13 00:38:46.188867 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Mar 13 00:38:46.189075 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Mar 13 00:38:46.189101 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:38:46.189120 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:38:46.189138 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:38:46.189155 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:38:46.189173 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 13 00:38:46.189201 kernel: iommu: Default domain type: Translated
Mar 13 00:38:46.189218 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:38:46.189236 kernel: efivars: Registered efivars operations
Mar 13 00:38:46.189254 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:38:46.189272 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:38:46.189291 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 13 00:38:46.189308 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 13 00:38:46.189326 kernel: e820: reserve RAM buffer [mem 0xbd2e5000-0xbfffffff]
Mar 13 00:38:46.189343 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 13 00:38:46.189375 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 13 00:38:46.189394 kernel: vgaarb: loaded
Mar 13 00:38:46.189413 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:38:46.189431 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:38:46.189448 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:38:46.189467 kernel: pnp: PnP ACPI init
Mar 13 00:38:46.189483 kernel: pnp: PnP ACPI: found 7 devices
Mar 13 00:38:46.189503 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:38:46.189522 kernel: NET: Registered PF_INET protocol family
Mar 13 00:38:46.189542 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 13 00:38:46.189559 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 13 00:38:46.189577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:38:46.189595 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:38:46.189612 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 13 00:38:46.189702 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 13 00:38:46.189721 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 13 00:38:46.189740 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 13 00:38:46.189758 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:38:46.189780 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:38:46.189979 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:38:46.190151 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:38:46.190319 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:38:46.190496 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 13 00:38:46.192834 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 13 00:38:46.192874 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:38:46.192902 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 13 00:38:46.192921 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 13 00:38:46.192940 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 13 00:38:46.192960 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 13 00:38:46.192979 kernel: clocksource: Switched to clocksource tsc
Mar 13 00:38:46.192998 kernel: Initialise system trusted keyrings
Mar 13 00:38:46.193017 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 13 00:38:46.193036 kernel: Key type asymmetric registered
Mar 13 00:38:46.193054 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:38:46.193077 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:38:46.193096 kernel: io scheduler mq-deadline registered
Mar 13 00:38:46.193114 kernel: io scheduler kyber registered
Mar 13 00:38:46.193133 kernel: io scheduler bfq registered
Mar 13 00:38:46.193151 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:38:46.193172 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 13 00:38:46.193381 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 13 00:38:46.193407 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 13 00:38:46.193600 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 13 00:38:46.194669 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 13 00:38:46.194893 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 13 00:38:46.194921 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:38:46.194941 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:38:46.194960 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 13 00:38:46.194980 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 13 00:38:46.194999 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 13 00:38:46.195212 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 13 00:38:46.195245 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:38:46.195264 kernel: i8042: Warning: Keylock active
Mar 13 00:38:46.195284 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:38:46.195303 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:38:46.195503 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 13 00:38:46.197745 kernel: rtc_cmos 00:00: registered as rtc0
Mar 13 00:38:46.197945 kernel: rtc_cmos 00:00: setting system clock to 2026-03-13T00:38:45 UTC (1773362325)
Mar 13 00:38:46.198128 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 13 00:38:46.198159 kernel: intel_pstate: CPU model not supported
Mar 13 00:38:46.198178 kernel: pstore: Using crash dump compression: deflate
Mar 13 00:38:46.198197 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 13 00:38:46.198217 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:38:46.198236 kernel: Segment Routing with IPv6
Mar 13 00:38:46.198256 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:38:46.198274 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:38:46.198293 kernel: Key type dns_resolver registered
Mar 13 00:38:46.198312 kernel: IPI shorthand broadcast: enabled
Mar 13 00:38:46.198337 kernel: sched_clock: Marking stable (4031004150, 972836234)->(5365904851, -362064467)
Mar 13 00:38:46.198364 kernel: registered taskstats version 1
Mar 13 00:38:46.198383 kernel: Loading compiled-in X.509 certificates
Mar 13 00:38:46.198402 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:38:46.198419 kernel: Demotion targets for Node 0: null
Mar 13 00:38:46.198438 kernel: Key type .fscrypt registered
Mar 13 00:38:46.198456 kernel: Key type fscrypt-provisioning registered
Mar 13 00:38:46.198475 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:38:46.198494 kernel: ima: No architecture policies found
Mar 13 00:38:46.198517 kernel: clk: Disabling unused clocks
Mar 13 00:38:46.198536 kernel: Warning: unable to open an initial console.
Mar 13 00:38:46.198555 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:38:46.198574 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 13 00:38:46.198593 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:38:46.198612 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:38:46.198656 kernel: Run /init as init process
Mar 13 00:38:46.198673 kernel: with arguments:
Mar 13 00:38:46.198691 kernel: /init
Mar 13 00:38:46.198713 kernel: with environment:
Mar 13 00:38:46.198729 kernel: HOME=/
Mar 13 00:38:46.198747 kernel: TERM=linux
Mar 13 00:38:46.198765 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:38:46.198789 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:38:46.198811 systemd[1]: Detected virtualization google.
Mar 13 00:38:46.198830 systemd[1]: Detected architecture x86-64.
Mar 13 00:38:46.198854 systemd[1]: Running in initrd.
Mar 13 00:38:46.198874 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:38:46.198895 systemd[1]: Hostname set to .
Mar 13 00:38:46.198915 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:38:46.198935 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:38:46.198955 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:38:46.198995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:38:46.199020 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:38:46.199061 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:38:46.199082 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:38:46.199105 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:38:46.199126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:38:46.199151 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:38:46.199172 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:38:46.199194 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:38:46.199215 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:38:46.199235 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:38:46.199256 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:38:46.199277 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:38:46.199298 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:38:46.199319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:38:46.199345 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:38:46.199374 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:38:46.199396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:38:46.199417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:38:46.199436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:38:46.199458 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:38:46.199479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:38:46.199511 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:38:46.199535 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:38:46.199560 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:38:46.199581 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:38:46.199603 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:38:46.200464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:38:46.200494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:46.200516 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:38:46.200579 systemd-journald[192]: Collecting audit messages is disabled.
Mar 13 00:38:46.200644 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:38:46.200670 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:38:46.200693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:38:46.200718 systemd-journald[192]: Journal started
Mar 13 00:38:46.200763 systemd-journald[192]: Runtime Journal (/run/log/journal/efc99b99227644b4aaba60470c2ed7c1) is 8M, max 148.6M, 140.6M free.
Mar 13 00:38:46.177745 systemd-modules-load[193]: Inserted module 'overlay'
Mar 13 00:38:46.203896 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:38:46.217824 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:38:46.225130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:46.238240 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:38:46.244763 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:38:46.244803 kernel: Bridge firewalling registered
Mar 13 00:38:46.244637 systemd-modules-load[193]: Inserted module 'br_netfilter'
Mar 13 00:38:46.247568 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:38:46.248161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:38:46.251949 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:38:46.256732 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:38:46.265499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:38:46.271079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:38:46.291829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:38:46.293200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:38:46.299114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:38:46.306047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:38:46.313289 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:38:46.353801 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:38:46.364453 systemd-resolved[229]: Positive Trust Anchors:
Mar 13 00:38:46.364471 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:38:46.364540 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:38:46.373177 systemd-resolved[229]: Defaulting to hostname 'linux'.
Mar 13 00:38:46.377399 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:38:46.387260 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:38:46.471667 kernel: SCSI subsystem initialized
Mar 13 00:38:46.486655 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:38:46.498671 kernel: iscsi: registered transport (tcp)
Mar 13 00:38:46.525235 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:38:46.525539 kernel: QLogic iSCSI HBA Driver
Mar 13 00:38:46.554275 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:38:46.579529 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:38:46.586825 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:38:46.652976 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:38:46.655067 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:38:46.721673 kernel: raid6: avx2x4 gen() 17884 MB/s
Mar 13 00:38:46.738669 kernel: raid6: avx2x2 gen() 18263 MB/s
Mar 13 00:38:46.756181 kernel: raid6: avx2x1 gen() 13918 MB/s
Mar 13 00:38:46.756372 kernel: raid6: using algorithm avx2x2 gen() 18263 MB/s
Mar 13 00:38:46.774141 kernel: raid6: .... xor() 18049 MB/s, rmw enabled
Mar 13 00:38:46.774229 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:38:46.798744 kernel: xor: automatically using best checksumming function avx
Mar 13 00:38:46.985733 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:38:46.994742 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:38:47.001447 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:38:47.035713 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 13 00:38:47.046106 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:38:47.052023 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:38:47.086012 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Mar 13 00:38:47.123804 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:38:47.127202 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:38:47.226269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:38:47.235948 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:38:47.329670 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:38:47.345659 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:38:47.377751 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Mar 13 00:38:47.393702 kernel: scsi host0: Virtio SCSI HBA
Mar 13 00:38:47.393805 kernel: blk-mq: reduced tag depth to 10240
Mar 13 00:38:47.404698 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 13 00:38:47.469661 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 13 00:38:47.503183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:38:47.503546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:47.514680 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Mar 13 00:38:47.515026 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 13 00:38:47.515266 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 13 00:38:47.515509 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 13 00:38:47.515763 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 13 00:38:47.517585 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:47.524348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:47.539252 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:38:47.539291 kernel: GPT:17805311 != 33554431
Mar 13 00:38:47.539315 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:38:47.539340 kernel: GPT:17805311 != 33554431
Mar 13 00:38:47.539361 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:38:47.539389 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:38:47.539412 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 13 00:38:47.528487 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:38:47.577772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:47.646889 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 13 00:38:47.647530 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:38:47.680823 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 13 00:38:47.693194 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 13 00:38:47.693491 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 13 00:38:47.716539 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 13 00:38:47.719940 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:38:47.724817 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:38:47.729827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:38:47.737140 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:38:47.756937 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:38:47.792183 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:38:47.795653 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:38:47.796375 disk-uuid[590]: Primary Header is updated.
Mar 13 00:38:47.796375 disk-uuid[590]: Secondary Entries is updated.
Mar 13 00:38:47.796375 disk-uuid[590]: Secondary Header is updated.
Mar 13 00:38:48.838669 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:38:48.839564 disk-uuid[598]: The operation has completed successfully.
Mar 13 00:38:48.933922 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:38:48.934146 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:38:48.979710 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:38:49.002602 sh[612]: Success
Mar 13 00:38:49.026159 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:38:49.026379 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:38:49.027811 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:38:49.042665 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Mar 13 00:38:49.138986 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:38:49.144754 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:38:49.160186 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:38:49.185711 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (624)
Mar 13 00:38:49.189701 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:38:49.189774 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:49.224651 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:38:49.224744 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:38:49.224784 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:38:49.230755 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:38:49.232272 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:38:49.235277 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:38:49.237405 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:38:49.248292 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:38:49.302153 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (659)
Mar 13 00:38:49.305510 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:49.305576 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:49.318921 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:38:49.319006 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:38:49.319040 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:38:49.327687 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:49.331946 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:38:49.341867 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:38:49.451524 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:38:49.459726 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:38:49.602862 systemd-networkd[795]: lo: Link UP
Mar 13 00:38:49.603747 systemd-networkd[795]: lo: Gained carrier
Mar 13 00:38:49.607934 systemd-networkd[795]: Enumeration completed
Mar 13 00:38:49.608104 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:38:49.609134 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:38:49.609141 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:38:49.615087 ignition[724]: Ignition 2.22.0
Mar 13 00:38:49.611995 systemd-networkd[795]: eth0: Link UP
Mar 13 00:38:49.615098 ignition[724]: Stage: fetch-offline
Mar 13 00:38:49.612536 systemd-networkd[795]: eth0: Gained carrier
Mar 13 00:38:49.615148 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:49.612557 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:38:49.615163 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:38:49.618177 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:38:49.615308 ignition[724]: parsed url from cmdline: ""
Mar 13 00:38:49.623639 systemd[1]: Reached target network.target - Network.
Mar 13 00:38:49.615313 ignition[724]: no config URL provided
Mar 13 00:38:49.628757 systemd-networkd[795]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.c.flatcar-212911.internal' to 'ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136'
Mar 13 00:38:49.615320 ignition[724]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:38:49.628778 systemd-networkd[795]: eth0: DHCPv4 address 10.128.0.75/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 13 00:38:49.615329 ignition[724]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:38:49.634264 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:38:49.615339 ignition[724]: failed to fetch config: resource requires networking
Mar 13 00:38:49.615584 ignition[724]: Ignition finished successfully
Mar 13 00:38:49.686367 ignition[805]: Ignition 2.22.0
Mar 13 00:38:49.686377 ignition[805]: Stage: fetch
Mar 13 00:38:49.686669 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:49.686688 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:38:49.686855 ignition[805]: parsed url from cmdline: ""
Mar 13 00:38:49.709710 unknown[805]: fetched base config from "system"
Mar 13 00:38:49.686860 ignition[805]: no config URL provided
Mar 13 00:38:49.709746 unknown[805]: fetched base config from "system"
Mar 13 00:38:49.686868 ignition[805]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:38:49.709925 unknown[805]: fetched user config from "gcp"
Mar 13 00:38:49.686879 ignition[805]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:38:49.714170 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:38:49.686922 ignition[805]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 13 00:38:49.720330 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:38:49.694169 ignition[805]: GET result: OK
Mar 13 00:38:49.694501 ignition[805]: parsing config with SHA512: aba9adbdf2ed53505bfd006b753788717991d9ffdc8c936095442f28a1591cfd930edd9d24e4ab1e5bdd100ccf2b84b1e68b9609792750231d05e64591826b96
Mar 13 00:38:49.710492 ignition[805]: fetch: fetch complete
Mar 13 00:38:49.710501 ignition[805]: fetch: fetch passed
Mar 13 00:38:49.710601 ignition[805]: Ignition finished successfully
Mar 13 00:38:49.773539 ignition[811]: Ignition 2.22.0
Mar 13 00:38:49.773567 ignition[811]: Stage: kargs
Mar 13 00:38:49.773855 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:49.778485 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:38:49.773874 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:38:49.781206 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:38:49.775343 ignition[811]: kargs: kargs passed
Mar 13 00:38:49.775458 ignition[811]: Ignition finished successfully
Mar 13 00:38:49.826274 ignition[817]: Ignition 2.22.0
Mar 13 00:38:49.826292 ignition[817]: Stage: disks
Mar 13 00:38:49.829880 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:38:49.826555 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:49.826572 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:38:49.836364 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:38:49.827758 ignition[817]: disks: disks passed
Mar 13 00:38:49.841823 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:38:49.827835 ignition[817]: Ignition finished successfully
Mar 13 00:38:49.848860 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:38:49.852192 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:38:49.857933 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:38:49.865264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:38:49.912033 systemd-fsck[826]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Mar 13 00:38:49.926174 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:38:49.933256 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:38:50.135652 kernel: EXT4-fs (sda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:38:50.135726 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:38:50.139525 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:38:50.147433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:38:50.161302 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:38:50.167300 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:38:50.167422 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:38:50.167474 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:38:50.185727 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (834)
Mar 13 00:38:50.188655 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:50.193718 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:50.193698 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:38:50.197043 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:38:50.205402 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:38:50.205446 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:38:50.205468 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:38:50.207909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:38:50.336712 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:38:50.345460 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:38:50.354519 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:38:50.361167 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:38:50.532931 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:38:50.536335 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:38:50.553878 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:38:50.566360 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:38:50.570719 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:50.614965 ignition[946]: INFO : Ignition 2.22.0
Mar 13 00:38:50.614965 ignition[946]: INFO : Stage: mount
Mar 13 00:38:50.621089 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:50.621089 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:38:50.621089 ignition[946]: INFO : mount: mount passed
Mar 13 00:38:50.621089 ignition[946]: INFO : Ignition finished successfully
Mar 13 00:38:50.621781 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:38:50.625910 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:38:50.633385 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:38:50.661184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:38:50.693891 systemd-networkd[795]: eth0: Gained IPv6LL
Mar 13 00:38:50.700542 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (958)
Mar 13 00:38:50.700581 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:38:50.700618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:38:50.710586 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:38:50.710696 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:38:50.710721 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:38:50.714132 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:38:50.764318 ignition[975]: INFO : Ignition 2.22.0
Mar 13 00:38:50.764318 ignition[975]: INFO : Stage: files
Mar 13 00:38:50.771804 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:50.771804 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:38:50.771804 ignition[975]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:38:50.771804 ignition[975]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:38:50.771804 ignition[975]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:38:50.791863 ignition[975]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:38:50.791863 ignition[975]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:38:50.791863 ignition[975]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:38:50.791863 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:38:50.791863 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:38:50.774725 unknown[975]: wrote ssh authorized keys file for user: core
Mar 13 00:38:50.894749 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:38:51.030333 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:38:51.035988 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:38:51.035988 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 13 00:38:51.269958 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 13 00:38:51.436648 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:38:51.442228 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:38:51.492779 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:38:51.492779 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:38:51.492779 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 13 00:38:51.757098 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 13 00:38:52.506733 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:38:52.511969 ignition[975]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 13 00:38:52.511969 ignition[975]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:38:52.523779 ignition[975]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:38:52.523779 ignition[975]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 13 00:38:52.523779 ignition[975]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:38:52.523779 ignition[975]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:38:52.523779 ignition[975]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:38:52.523779 ignition[975]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:38:52.523779 ignition[975]: INFO : files: files passed
Mar 13 00:38:52.523779 ignition[975]: INFO : Ignition finished successfully
Mar 13 00:38:52.516046 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:38:52.521931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:38:52.527957 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:38:52.563091 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:38:52.563228 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:38:52.584102 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:38:52.588834 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:38:52.588834 initrd-setup-root-after-ignition[1005]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:38:52.586480 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:38:52.593402 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:38:52.599542 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:38:52.674832 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:38:52.675029 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:38:52.681696 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:38:52.685094 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:38:52.693243 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:38:52.695122 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:38:52.734315 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:38:52.744125 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:38:52.787614 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:38:52.794201 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:38:52.800111 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:38:52.800588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:38:52.800841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:38:52.814167 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:38:52.814539 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:38:52.820279 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:38:52.825354 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:38:52.831222 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:38:52.842060 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:38:52.842473 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:38:52.848357 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:38:52.854260 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:38:52.860026 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:38:52.865339 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:38:52.870733 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:38:52.870977 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:38:52.881236 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:38:52.891028 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:38:52.898014 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:38:52.898450 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:38:52.902340 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:38:52.902565 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:38:52.913216 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:38:52.913574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:38:52.917392 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:38:52.917894 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:38:52.925471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:38:52.935189 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:38:52.940078 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:38:52.940390 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:38:52.944256 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:38:52.944543 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:38:52.967877 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:38:52.968203 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:38:52.981092 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:38:52.983805 ignition[1029]: INFO : Ignition 2.22.0
Mar 13 00:38:52.983805 ignition[1029]: INFO : Stage: umount
Mar 13 00:38:52.983805 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:38:52.983805 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:38:53.001939 ignition[1029]: INFO : umount: umount passed
Mar 13 00:38:53.001939 ignition[1029]: INFO : Ignition finished successfully
Mar 13 00:38:52.987322 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:38:52.987664 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:38:52.998379 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:38:52.998545 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:38:53.005343 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:38:53.005494 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:38:53.010899 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:38:53.010991 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:38:53.014011 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 00:38:53.014085 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 00:38:53.021162 systemd[1]: Stopped target network.target - Network.
Mar 13 00:38:53.024046 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:38:53.024390 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:38:53.031932 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:38:53.036914 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:38:53.037056 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:38:53.044920 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:38:53.048869 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:38:53.054004 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:38:53.054097 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:38:53.058976 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:38:53.059060 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:38:53.062960 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:38:53.063096 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:38:53.069901 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:38:53.069984 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:38:53.074340 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:38:53.074437 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:38:53.080069 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:38:53.085988 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:38:53.091545 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:38:53.091739 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:38:53.103377 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:38:53.103716 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:38:53.103898 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:38:53.111527 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:38:53.113750 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:38:53.114090 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:38:53.114141 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:38:53.122856 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:38:53.134877 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:38:53.135061 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:38:53.145930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:38:53.146036 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:38:53.156148 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:38:53.156251 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:38:53.163909 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:38:53.164021 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:38:53.172251 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:38:53.181801 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:38:53.181922 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:38:53.188496 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:38:53.189143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:38:53.196681 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:38:53.196818 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:38:53.203085 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:38:53.203143 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:38:53.211905 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:38:53.212006 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:38:53.218818 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:38:53.218923 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:38:53.222984 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:38:53.223057 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:38:53.233840 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:38:53.242886 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:38:53.243042 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:38:53.253808 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:38:53.253923 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:38:53.266171 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 13 00:38:53.266270 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:38:53.274358 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:38:53.274428 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:38:53.282895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:38:53.283004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:38:53.291864 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:38:53.291956 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 13 00:38:53.292001 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:38:53.292060 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:38:53.389793 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:38:53.292681 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:38:53.292898 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:38:53.298309 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:38:53.298479 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:38:53.303260 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:38:53.308593 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:38:53.338512 systemd[1]: Switching root.
Mar 13 00:38:53.415826 systemd-journald[192]: Journal stopped
Mar 13 00:38:55.783138 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:38:55.783200 kernel: SELinux: policy capability open_perms=1
Mar 13 00:38:55.783230 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:38:55.783249 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:38:55.783268 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:38:55.783288 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:38:55.783310 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:38:55.783330 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:38:55.783353 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:38:55.783374 kernel: audit: type=1403 audit(1773362334.120:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:38:55.783397 systemd[1]: Successfully loaded SELinux policy in 71.054ms.
Mar 13 00:38:55.783421 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.723ms.
Mar 13 00:38:55.783447 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:38:55.783468 systemd[1]: Detected virtualization google.
Mar 13 00:38:55.783495 systemd[1]: Detected architecture x86-64.
Mar 13 00:38:55.783517 systemd[1]: Detected first boot.
Mar 13 00:38:55.783540 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:38:55.783562 zram_generator::config[1072]: No configuration found.
Mar 13 00:38:55.783585 kernel: Guest personality initialized and is inactive
Mar 13 00:38:55.783606 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:38:55.783669 kernel: Initialized host personality
Mar 13 00:38:55.783692 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:38:55.783714 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:38:55.783738 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:38:55.783759 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:38:55.783789 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:38:55.783811 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:38:55.783838 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:38:55.783860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:38:55.783883 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:38:55.783906 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:38:55.783929 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:38:55.783952 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:38:55.783987 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:38:55.784013 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:38:55.784035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:38:55.784058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:38:55.784081 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:38:55.784103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:38:55.784126 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:38:55.784155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:38:55.784179 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:38:55.784202 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:38:55.784229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:38:55.784252 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:38:55.784275 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:38:55.784298 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:38:55.784321 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:38:55.784344 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:38:55.784367 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:38:55.784393 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:38:55.784416 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:38:55.784439 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:38:55.784462 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:38:55.784486 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:38:55.784509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:38:55.784535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:38:55.784559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:38:55.784582 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:38:55.784605 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:38:55.786695 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:38:55.786740 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:38:55.786765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:55.786795 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:38:55.786819 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:38:55.786840 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:38:55.786863 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:38:55.786899 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:38:55.786922 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:38:55.786943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:38:55.786962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:38:55.786990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:38:55.787012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:38:55.787035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:38:55.787059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:38:55.787084 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:38:55.787107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:38:55.787129 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:38:55.787151 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:38:55.787174 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:38:55.787201 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:38:55.787224 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:38:55.787247 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:38:55.787270 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:38:55.787292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:38:55.787313 kernel: fuse: init (API version 7.41)
Mar 13 00:38:55.787335 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:38:55.787357 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:38:55.787383 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:38:55.787406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:38:55.787428 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:38:55.787450 systemd[1]: Stopped verity-setup.service.
Mar 13 00:38:55.787474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:55.787496 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:38:55.787534 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:38:55.787557 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:38:55.787585 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:38:55.787615 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:38:55.787674 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:38:55.787697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:38:55.787719 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:38:55.787742 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:38:55.787764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:38:55.787786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:38:55.787810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:38:55.787837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:38:55.787859 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:38:55.787881 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:38:55.787904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:38:55.787927 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:38:55.787950 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:38:55.788058 systemd-journald[1143]: Collecting audit messages is disabled.
Mar 13 00:38:55.788117 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:38:55.788138 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:38:55.788160 kernel: loop: module loaded
Mar 13 00:38:55.788181 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:38:55.788204 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:38:55.788232 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:38:55.788264 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:38:55.788291 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:38:55.788316 systemd-journald[1143]: Journal started
Mar 13 00:38:55.788360 systemd-journald[1143]: Runtime Journal (/run/log/journal/cc75624206504bbe8f15d152ef11f918) is 8M, max 148.6M, 140.6M free.
Mar 13 00:38:55.123842 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:38:55.149592 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 00:38:55.150201 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:38:55.796647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:38:55.802664 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:38:55.808650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:38:55.817653 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:38:55.827654 kernel: ACPI: bus type drm_connector registered
Mar 13 00:38:55.832826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:38:55.840060 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:38:55.857147 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:38:55.857238 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:38:55.871033 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:38:55.875640 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:38:55.879905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:38:55.883265 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:38:55.884775 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:38:55.888339 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:38:55.893515 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:38:55.897313 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:38:55.898779 kernel: loop0: detected capacity change from 0 to 128560
Mar 13 00:38:55.906051 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:38:55.942878 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:38:55.989712 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:38:55.996846 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:38:56.002413 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:38:56.004383 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:38:56.006069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:38:56.027080 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:38:56.065707 kernel: loop1: detected capacity change from 0 to 50736
Mar 13 00:38:56.078240 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 13 00:38:56.079877 systemd-journald[1143]: Time spent on flushing to /var/log/journal/cc75624206504bbe8f15d152ef11f918 is 54.681ms for 976 entries.
Mar 13 00:38:56.079877 systemd-journald[1143]: System Journal (/var/log/journal/cc75624206504bbe8f15d152ef11f918) is 8M, max 584.8M, 576.8M free.
Mar 13 00:38:56.169401 systemd-journald[1143]: Received client request to flush runtime journal.
Mar 13 00:38:56.078274 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Mar 13 00:38:56.096407 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:38:56.177429 kernel: loop2: detected capacity change from 0 to 110984
Mar 13 00:38:56.109235 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:38:56.121878 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:38:56.154010 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:38:56.177233 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:38:56.238220 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:38:56.247523 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:38:56.252884 kernel: loop3: detected capacity change from 0 to 219192
Mar 13 00:38:56.289833 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Mar 13 00:38:56.289869 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Mar 13 00:38:56.307555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:38:56.390707 kernel: loop4: detected capacity change from 0 to 128560
Mar 13 00:38:56.441691 kernel: loop5: detected capacity change from 0 to 50736
Mar 13 00:38:56.481765 kernel: loop6: detected capacity change from 0 to 110984
Mar 13 00:38:56.526748 kernel: loop7: detected capacity change from 0 to 219192
Mar 13 00:38:56.588530 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Mar 13 00:38:56.589953 (sd-merge)[1224]: Merged extensions into '/usr'.
Mar 13 00:38:56.599584 systemd[1]: Reload requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:38:56.599818 systemd[1]: Reloading...
Mar 13 00:38:56.852667 zram_generator::config[1250]: No configuration found.
Mar 13 00:38:57.097249 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:38:57.319940 systemd[1]: Reloading finished in 718 ms.
Mar 13 00:38:57.351420 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:38:57.355259 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:38:57.359202 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:38:57.378342 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:38:57.388466 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:38:57.402827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:38:57.431772 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:38:57.431828 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:38:57.432296 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:38:57.432838 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:38:57.434530 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:38:57.435132 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Mar 13 00:38:57.435240 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Mar 13 00:38:57.436313 systemd[1]: Reload requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:38:57.436451 systemd[1]: Reloading...
Mar 13 00:38:57.442801 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:38:57.442821 systemd-tmpfiles[1292]: Skipping /boot
Mar 13 00:38:57.459788 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:38:57.459814 systemd-tmpfiles[1292]: Skipping /boot
Mar 13 00:38:57.521124 systemd-udevd[1293]: Using default interface naming scheme 'v255'.
Mar 13 00:38:57.591246 zram_generator::config[1328]: No configuration found.
Mar 13 00:38:58.106667 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:38:58.110619 systemd[1]: Reloading finished in 673 ms.
Mar 13 00:38:58.124246 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:38:58.140661 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 13 00:38:58.155423 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:38:58.180782 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Mar 13 00:38:58.188066 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:38:58.207951 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Mar 13 00:38:58.220674 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:38:58.223162 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:58.227341 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:38:58.239741 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Mar 13 00:38:58.252013 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:38:58.261663 kernel: ACPI: button: Sleep Button [SLPF]
Mar 13 00:38:58.267082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:38:58.271752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:38:58.286805 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:38:58.300122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:38:58.314115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:38:58.332649 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 13 00:38:58.344163 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 13 00:38:58.352218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:38:58.352772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:38:58.358517 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:38:58.373993 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:38:58.390005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:38:58.399209 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:38:58.439909 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:38:58.449802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:38:58.457934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:38:58.460035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:38:58.472671 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:38:58.474019 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:38:58.483679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:38:58.484722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:38:58.496527 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:38:58.496871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:38:58.505563 kernel: EDAC MC: Ver: 3.0.0
Mar 13 00:38:58.525966 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:38:58.566818 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 13 00:38:58.574374 augenrules[1449]: No rules
Mar 13 00:38:58.581688 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:38:58.582051 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:38:58.601516 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:38:58.612703 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:38:58.629375 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:38:58.646343 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Mar 13 00:38:58.655837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:38:58.655957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:38:58.658849 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:38:58.676994 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:38:58.687826 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:38:58.719279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:38:58.749234 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:38:58.769921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 13 00:38:58.785587 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Mar 13 00:38:58.806327 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 13 00:38:58.867779 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 13 00:38:58.886676 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 13 00:38:58.937720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:38:58.998111 systemd-networkd[1425]: lo: Link UP Mar 13 00:38:58.998154 systemd-networkd[1425]: lo: Gained carrier Mar 13 00:38:59.001144 systemd-networkd[1425]: Enumeration completed Mar 13 00:38:59.001824 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:38:59.002503 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:38:59.002641 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:38:59.003329 systemd-networkd[1425]: eth0: Link UP Mar 13 00:38:59.003857 systemd-networkd[1425]: eth0: Gained carrier Mar 13 00:38:59.003887 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:38:59.013118 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 13 00:38:59.015533 systemd-networkd[1425]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.c.flatcar-212911.internal' to 'ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136' Mar 13 00:38:59.015577 systemd-networkd[1425]: eth0: DHCPv4 address 10.128.0.75/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 13 00:38:59.029459 systemd-resolved[1427]: Positive Trust Anchors: Mar 13 00:38:59.029484 systemd-resolved[1427]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:38:59.029550 systemd-resolved[1427]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:38:59.029943 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 13 00:38:59.039890 systemd-resolved[1427]: Defaulting to hostname 'linux'. Mar 13 00:38:59.043899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:38:59.053030 systemd[1]: Reached target network.target - Network. Mar 13 00:38:59.060824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:38:59.071879 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:38:59.081980 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 00:38:59.092988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 13 00:38:59.104015 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 13 00:38:59.115268 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 00:38:59.125052 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 00:38:59.135854 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 13 00:38:59.146860 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 00:38:59.146940 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:38:59.154845 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:38:59.164584 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 00:38:59.176131 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 00:38:59.187032 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 13 00:38:59.198109 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 13 00:38:59.208830 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 13 00:38:59.228756 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 00:38:59.238529 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 13 00:38:59.250189 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 13 00:38:59.261148 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 00:38:59.271604 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:38:59.280858 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:38:59.288980 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 13 00:38:59.289039 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:38:59.291275 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 00:38:59.308919 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 13 00:38:59.331357 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:38:59.350851 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:38:59.368773 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 00:38:59.380987 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:38:59.388776 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:38:59.391332 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 13 00:38:59.395156 jq[1500]: false Mar 13 00:38:59.409203 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 00:38:59.421402 systemd[1]: Started ntpd.service - Network Time Service. Mar 13 00:38:59.432476 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 13 00:38:59.434868 coreos-metadata[1497]: Mar 13 00:38:59.434 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Mar 13 00:38:59.440800 coreos-metadata[1497]: Mar 13 00:38:59.440 INFO Fetch successful Mar 13 00:38:59.440917 coreos-metadata[1497]: Mar 13 00:38:59.440 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Mar 13 00:38:59.441177 extend-filesystems[1503]: Found /dev/sda6 Mar 13 00:38:59.452956 coreos-metadata[1497]: Mar 13 00:38:59.441 INFO Fetch successful Mar 13 00:38:59.452956 coreos-metadata[1497]: Mar 13 00:38:59.441 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Mar 13 00:38:59.452956 coreos-metadata[1497]: Mar 13 00:38:59.444 INFO Fetch successful Mar 13 00:38:59.452956 coreos-metadata[1497]: Mar 13 00:38:59.444 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Mar 13 00:38:59.452956 coreos-metadata[1497]: Mar 13 00:38:59.445 INFO Fetch successful Mar 13 00:38:59.448539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:38:59.443886 oslogin_cache_refresh[1504]: Refreshing passwd entry cache Mar 13 00:38:59.453850 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing passwd entry cache Mar 13 00:38:59.454246 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting users, quitting Mar 13 00:38:59.454246 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:38:59.454246 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing group entry cache Mar 13 00:38:59.454072 oslogin_cache_refresh[1504]: Failure getting users, quitting Mar 13 00:38:59.454099 oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 13 00:38:59.454168 oslogin_cache_refresh[1504]: Refreshing group entry cache Mar 13 00:38:59.463122 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting groups, quitting Mar 13 00:38:59.463203 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:38:59.463132 oslogin_cache_refresh[1504]: Failure getting groups, quitting Mar 13 00:38:59.463153 oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:38:59.466015 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:38:59.466697 extend-filesystems[1503]: Found /dev/sda9 Mar 13 00:38:59.480826 extend-filesystems[1503]: Checking size of /dev/sda9 Mar 13 00:38:59.485494 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 00:38:59.500046 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Mar 13 00:38:59.503721 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 00:38:59.504779 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:38:59.517116 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 00:38:59.527183 extend-filesystems[1503]: Resized partition /dev/sda9 Mar 13 00:38:59.534108 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:38:59.548370 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:38:59.549144 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:38:59.550752 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Mar 13 00:38:59.551381 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 13 00:38:59.552314 ntpd[1506]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: ---------------------------------------------------- Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: corporation. Support and training for ntp-4 are Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: available at https://www.nwtime.org/support Mar 13 00:38:59.553245 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: ---------------------------------------------------- Mar 13 00:38:59.552401 ntpd[1506]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:38:59.552417 ntpd[1506]: ---------------------------------------------------- Mar 13 00:38:59.552432 ntpd[1506]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:38:59.552446 ntpd[1506]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:38:59.552459 ntpd[1506]: corporation. Support and training for ntp-4 are Mar 13 00:38:59.552472 ntpd[1506]: available at https://www.nwtime.org/support Mar 13 00:38:59.552486 ntpd[1506]: ---------------------------------------------------- Mar 13 00:38:59.554480 extend-filesystems[1534]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:38:59.592129 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Mar 13 00:38:59.592238 kernel: ntpd[1506]: segfault at 24 ip 000055facf34faeb sp 00007ffc2ff502c0 error 4 in ntpd[68aeb,55facf2ed000+80000] likely on CPU 0 (core 0, socket 0) Mar 13 00:38:59.592278 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Mar 13 00:38:59.562428 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: proto: precision = 0.069 usec (-24) Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: basedate set to 2026-02-28 Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: gps base set to 2026-03-01 (week 2408) Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: Listen and drop on 0 v6wildcard [::]:123 Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: Listen normally on 2 lo 127.0.0.1:123 Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: Listen normally on 3 eth0 10.128.0.75:123 Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: Listen normally on 4 lo [::1]:123 Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4b%2]:123 flags 0x811 failed: Cannot assign requested address Mar 13 00:38:59.592414 ntpd[1506]: 13 Mar 00:38:59 ntpd[1506]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4b%2]:123 Mar 13 00:38:59.566740 ntpd[1506]: proto: precision = 0.069 usec (-24) Mar 13 00:38:59.563759 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 00:38:59.569937 ntpd[1506]: basedate set to 2026-02-28 Mar 13 00:38:59.569966 ntpd[1506]: gps base set to 2026-03-01 (week 2408) Mar 13 00:38:59.570130 ntpd[1506]: Listen and drop on 0 v6wildcard [::]:123 Mar 13 00:38:59.570169 ntpd[1506]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 13 00:38:59.570422 ntpd[1506]: Listen normally on 2 lo 127.0.0.1:123 Mar 13 00:38:59.570461 ntpd[1506]: Listen normally on 3 eth0 10.128.0.75:123 Mar 13 00:38:59.570501 ntpd[1506]: Listen normally on 4 lo [::1]:123 Mar 13 00:38:59.570544 ntpd[1506]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4b%2]:123 flags 0x811 failed: Cannot assign requested address Mar 13 00:38:59.570574 ntpd[1506]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4b%2]:123 Mar 13 00:38:59.615510 jq[1528]: true
Mar 13 00:38:59.628385 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:38:59.631004 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:38:59.690394 systemd-coredump[1545]: Process 1506 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Mar 13 00:38:59.714155 update_engine[1524]: I20260313 00:38:59.714018 1524 main.cc:92] Flatcar Update Engine starting Mar 13 00:38:59.721263 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:38:59.726798 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Mar 13 00:38:59.750283 systemd[1]: Started systemd-coredump@0-1545-0.service - Process Core Dump (PID 1545/UID 0). Mar 13 00:38:59.770992 jq[1540]: true Mar 13 00:38:59.802726 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Mar 13 00:38:59.812189 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 13 00:38:59.822200 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:38:59.855687 tar[1538]: linux-amd64/LICENSE Mar 13 00:38:59.855687 tar[1538]: linux-amd64/helm Mar 13 00:38:59.864565 extend-filesystems[1534]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 13 00:38:59.864565 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 13 00:38:59.864565 extend-filesystems[1534]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Mar 13 00:38:59.896871 extend-filesystems[1503]: Resized filesystem in /dev/sda9 Mar 13 00:38:59.929109 systemd-logind[1519]: Watching system buttons on /dev/input/event2 (Power Button) Mar 13 00:38:59.929152 systemd-logind[1519]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 13 00:38:59.929184 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 00:38:59.939523 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:38:59.953243 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:38:59.953536 systemd-logind[1519]: New seat seat0. Mar 13 00:39:00.068550 dbus-daemon[1498]: [system] SELinux support is enabled Mar 13 00:39:00.069662 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:39:00.078948 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 00:39:00.096057 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:39:00.098333 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 13 00:39:00.104895 dbus-daemon[1498]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1425 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 13 00:39:00.105128 update_engine[1524]: I20260313 00:39:00.105047 1524 update_check_scheduler.cc:74] Next update check in 7m18s Mar 13 00:39:00.167386 dbus-daemon[1498]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 13 00:39:00.195937 systemd[1]: Started update-engine.service - Update Engine. Mar 13 00:39:00.212261 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:39:00.228081 systemd[1]: Starting sshkeys.service... Mar 13 00:39:00.234851 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:39:00.235159 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:39:00.252857 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 13 00:39:00.261809 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:39:00.262064 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 00:39:00.296079 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 00:39:00.302386 systemd-coredump[1556]: Process 1506 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. 
Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1506: #0 0x000055facf34faeb n/a (ntpd + 0x68aeb) #1 0x000055facf2f8cdf n/a (ntpd + 0x11cdf) #2 0x000055facf2f9575 n/a (ntpd + 0x12575) #3 0x000055facf2f4d8a n/a (ntpd + 0xdd8a) #4 0x000055facf2f65d3 n/a (ntpd + 0xf5d3) #5 0x000055facf2fefd1 n/a (ntpd + 0x17fd1) #6 0x000055facf2efc2d n/a (ntpd + 0x8c2d) #7 0x00007fc2f0e7316c n/a (libc.so.6 + 0x2716c) #8 0x00007fc2f0e73229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055facf2efc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Mar 13 00:39:00.322616 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 13 00:39:00.325381 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 13 00:39:00.355008 systemd[1]: systemd-coredump@0-1545-0.service: Deactivated successfully. Mar 13 00:39:00.378491 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 13 00:39:00.394394 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 13 00:39:00.485233 coreos-metadata[1596]: Mar 13 00:39:00.484 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Mar 13 00:39:00.485815 coreos-metadata[1596]: Mar 13 00:39:00.485 INFO Fetch failed with 404: resource not found Mar 13 00:39:00.486020 systemd-networkd[1425]: eth0: Gained IPv6LL Mar 13 00:39:00.486920 coreos-metadata[1596]: Mar 13 00:39:00.485 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Mar 13 00:39:00.489240 coreos-metadata[1596]: Mar 13 00:39:00.487 INFO Fetch successful Mar 13 00:39:00.489240 coreos-metadata[1596]: Mar 13 00:39:00.487 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Mar 13 00:39:00.489240 coreos-metadata[1596]: Mar 13 00:39:00.488 INFO Fetch failed with 404: resource not found Mar 13 00:39:00.489240 coreos-metadata[1596]: Mar 13 00:39:00.488 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Mar 13 00:39:00.491241 coreos-metadata[1596]: Mar 13 00:39:00.489 INFO Fetch failed with 404: resource not found Mar 13 00:39:00.491241 coreos-metadata[1596]: Mar 13 00:39:00.489 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Mar 13 00:39:00.491241 coreos-metadata[1596]: Mar 13 00:39:00.490 INFO Fetch successful Mar 13 00:39:00.491060 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Mar 13 00:39:00.493662 unknown[1596]: wrote ssh authorized keys file for user: core Mar 13 00:39:00.502898 systemd[1]: Started ntpd.service - Network Time Service. Mar 13 00:39:00.513265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:39:00.545834 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:39:00.561167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:39:00.580138 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:39:00.599511 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Mar 13 00:39:00.650943 init.sh[1605]: + '[' -e /etc/default/instance_configs.cfg.template ']' Mar 13 00:39:00.650943 init.sh[1605]: + echo -e '[InstanceSetup]\nset_host_keys = false' Mar 13 00:39:00.650943 init.sh[1605]: + /usr/bin/google_instance_setup Mar 13 00:39:00.662089 update-ssh-keys[1600]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:39:00.661347 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:39:00.683274 systemd[1]: Finished sshkeys.service. Mar 13 00:39:00.752234 ntpd[1599]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: ---------------------------------------------------- Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: corporation. Support and training for ntp-4 are Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: available at https://www.nwtime.org/support Mar 13 00:39:00.757660 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: ---------------------------------------------------- Mar 13 00:39:00.756727 ntpd[1599]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 13 00:39:00.756747 ntpd[1599]: ---------------------------------------------------- Mar 13 00:39:00.756761 ntpd[1599]: ntp-4 is maintained by Network Time Foundation, Mar 13 00:39:00.756775 ntpd[1599]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 13 00:39:00.756789 ntpd[1599]: corporation. Support and training for ntp-4 are Mar 13 00:39:00.756804 ntpd[1599]: available at https://www.nwtime.org/support Mar 13 00:39:00.756818 ntpd[1599]: ---------------------------------------------------- Mar 13 00:39:00.761013 ntpd[1599]: proto: precision = 0.084 usec (-23)
Mar 13 00:39:00.763786 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: proto: precision = 0.084 usec (-23) Mar 13 00:39:00.763786 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: basedate set to 2026-02-28 Mar 13 00:39:00.763786 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: gps base set to 2026-03-01 (week 2408) Mar 13 00:39:00.763786 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Listen and drop on 0 v6wildcard [::]:123 Mar 13 00:39:00.763786 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 13 00:39:00.761354 ntpd[1599]: basedate set to 2026-02-28 Mar 13 00:39:00.761372 ntpd[1599]: gps base set to 2026-03-01 (week 2408) Mar 13 00:39:00.761496 ntpd[1599]: Listen and drop on 0 v6wildcard [::]:123 Mar 13 00:39:00.761537 ntpd[1599]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 13 00:39:00.770906 ntpd[1599]: Listen normally on 2 lo 127.0.0.1:123 Mar 13 00:39:00.775673 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Listen normally on 2 lo 127.0.0.1:123 Mar 13 00:39:00.775673 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Listen normally on 3 eth0 10.128.0.75:123 Mar 13 00:39:00.775673 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Listen normally on 4 lo [::1]:123 Mar 13 00:39:00.775673 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4b%2]:123 Mar 13 00:39:00.775673 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: Listening on routing socket on fd #22 for interface updates Mar 13 00:39:00.770974 ntpd[1599]: Listen normally on 3 eth0 10.128.0.75:123 Mar 13 00:39:00.771021 ntpd[1599]: Listen normally on 4 lo [::1]:123 Mar 13 00:39:00.771061 ntpd[1599]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4b%2]:123 Mar 13 00:39:00.771101 ntpd[1599]: Listening on routing socket on fd #22 for interface updates
Mar 13 00:39:00.797724 ntpd[1599]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 13 00:39:00.799210 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 13 00:39:00.799210 ntpd[1599]: 13 Mar 00:39:00 ntpd[1599]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 13 00:39:00.797779 ntpd[1599]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 13 00:39:00.859134 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 13 00:39:00.860335 dbus-daemon[1498]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 13 00:39:00.861185 dbus-daemon[1498]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1585 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 13 00:39:00.887015 systemd[1]: Starting polkit.service - Authorization Manager... Mar 13 00:39:00.895428 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 13 00:39:00.987868 containerd[1541]: time="2026-03-13T00:39:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:39:01.001142 containerd[1541]: time="2026-03-13T00:39:00.998802570Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:39:01.081392 containerd[1541]: time="2026-03-13T00:39:01.081333813Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.539µs" Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.082766347Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.082825977Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083063015Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083090040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083129758Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083223019Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083244223Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 
00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083543775Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083568551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083587532Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:39:01.084485 containerd[1541]: time="2026-03-13T00:39:01.083601646Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:39:01.085824 containerd[1541]: time="2026-03-13T00:39:01.085788128Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:39:01.087378 containerd[1541]: time="2026-03-13T00:39:01.087185134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:39:01.087378 containerd[1541]: time="2026-03-13T00:39:01.087251878Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:39:01.087378 containerd[1541]: time="2026-03-13T00:39:01.087270759Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:39:01.088084 containerd[1541]: time="2026-03-13T00:39:01.087999403Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:39:01.090275 
containerd[1541]: time="2026-03-13T00:39:01.089784303Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:39:01.090275 containerd[1541]: time="2026-03-13T00:39:01.089902182Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:39:01.099578 containerd[1541]: time="2026-03-13T00:39:01.099414115Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100131945Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100179975Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100200290Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100220258Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100239126Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100260277Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100426868Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100493906Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: 
time="2026-03-13T00:39:01.100516541Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100535383Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100559668Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100890124Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100930748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:39:01.101972 containerd[1541]: time="2026-03-13T00:39:01.100973895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.100997814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101018461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101040590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101063383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101097740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101121407Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101141290Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101166731Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101268080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:39:01.102748 containerd[1541]: time="2026-03-13T00:39:01.101293562Z" level=info msg="Start snapshots syncer" Mar 13 00:39:01.105934 containerd[1541]: time="2026-03-13T00:39:01.104830554Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:39:01.110656 containerd[1541]: time="2026-03-13T00:39:01.107010445Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:39:01.110656 containerd[1541]: time="2026-03-13T00:39:01.107125036Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.109888598Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110130600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110174727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110196232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110225206Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110250314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110279189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110306020Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110351944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110370967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110393100Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110432536Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110459368Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:39:01.110997 containerd[1541]: time="2026-03-13T00:39:01.110475432Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:39:01.111833 containerd[1541]: time="2026-03-13T00:39:01.110492640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:39:01.111833 containerd[1541]: time="2026-03-13T00:39:01.110509167Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:39:01.111833 containerd[1541]: time="2026-03-13T00:39:01.110525569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:39:01.111833 containerd[1541]: time="2026-03-13T00:39:01.110558255Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:39:01.111833 containerd[1541]: time="2026-03-13T00:39:01.110596526Z" level=info msg="runtime interface created" Mar 13 00:39:01.111833 containerd[1541]: time="2026-03-13T00:39:01.110606420Z" level=info msg="created NRI interface" Mar 13 00:39:01.115652 containerd[1541]: time="2026-03-13T00:39:01.113956062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:39:01.115652 containerd[1541]: time="2026-03-13T00:39:01.114021316Z" level=info msg="Connect containerd service" Mar 13 00:39:01.115652 containerd[1541]: time="2026-03-13T00:39:01.114068992Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:39:01.120873 
containerd[1541]: time="2026-03-13T00:39:01.120429529Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:39:01.129085 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:39:01.450865 polkitd[1621]: Started polkitd version 126 Mar 13 00:39:01.497544 polkitd[1621]: Loading rules from directory /etc/polkit-1/rules.d Mar 13 00:39:01.510502 polkitd[1621]: Loading rules from directory /run/polkit-1/rules.d Mar 13 00:39:01.510610 polkitd[1621]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:39:01.511445 polkitd[1621]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 13 00:39:01.511492 polkitd[1621]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:39:01.511557 polkitd[1621]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 13 00:39:01.518074 polkitd[1621]: Finished loading, compiling and executing 2 rules Mar 13 00:39:01.520530 systemd[1]: Started polkit.service - Authorization Manager. Mar 13 00:39:01.527132 dbus-daemon[1498]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 13 00:39:01.528502 polkitd[1621]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.529606768Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.533387555Z" level=info msg="Start subscribing containerd event" Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535588876Z" level=info msg="Start recovering state" Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535767836Z" level=info msg="Start event monitor" Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535786527Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535806519Z" level=info msg="Start streaming server" Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535820816Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535833281Z" level=info msg="runtime interface starting up..." Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535842969Z" level=info msg="starting plugins..." Mar 13 00:39:01.538720 containerd[1541]: time="2026-03-13T00:39:01.535861523Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:39:01.539620 containerd[1541]: time="2026-03-13T00:39:01.539310927Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:39:01.543105 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:39:01.546764 containerd[1541]: time="2026-03-13T00:39:01.546719699Z" level=info msg="containerd successfully booted in 0.569553s" Mar 13 00:39:01.618391 systemd-hostnamed[1585]: Hostname set to (transient) Mar 13 00:39:01.619907 systemd-resolved[1427]: System hostname changed to 'ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136'. Mar 13 00:39:02.031940 tar[1538]: linux-amd64/README.md Mar 13 00:39:02.071263 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 13 00:39:02.120900 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:39:02.162030 instance-setup[1608]: INFO Running google_set_multiqueue. Mar 13 00:39:02.173335 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:39:02.191303 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:39:02.202810 systemd[1]: Started sshd@0-10.128.0.75:22-20.161.92.111:34074.service - OpenSSH per-connection server daemon (20.161.92.111:34074). Mar 13 00:39:02.205083 instance-setup[1608]: INFO Set channels for eth0 to 2. Mar 13 00:39:02.221011 instance-setup[1608]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 13 00:39:02.226769 instance-setup[1608]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 13 00:39:02.227451 instance-setup[1608]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 13 00:39:02.229111 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:39:02.230450 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:39:02.232917 instance-setup[1608]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 13 00:39:02.233103 instance-setup[1608]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 13 00:39:02.238581 instance-setup[1608]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 13 00:39:02.239229 instance-setup[1608]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Mar 13 00:39:02.248172 instance-setup[1608]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 13 00:39:02.248896 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 13 00:39:02.275711 instance-setup[1608]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 13 00:39:02.294363 instance-setup[1608]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 13 00:39:02.302439 instance-setup[1608]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 13 00:39:02.302503 instance-setup[1608]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 13 00:39:02.315330 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:39:02.328182 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:39:02.340700 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:39:02.351307 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:39:02.353849 init.sh[1605]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 13 00:39:02.569066 startup-script[1699]: INFO Starting startup scripts. Mar 13 00:39:02.574721 sshd[1675]: Accepted publickey for core from 20.161.92.111 port 34074 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:39:02.579120 startup-script[1699]: INFO No startup scripts found in metadata. Mar 13 00:39:02.579213 startup-script[1699]: INFO Finished running startup scripts. Mar 13 00:39:02.579616 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:02.598554 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:39:02.611151 init.sh[1605]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 13 00:39:02.611151 init.sh[1605]: + daemon_pids=() Mar 13 00:39:02.611151 init.sh[1605]: + for d in accounts clock_skew network Mar 13 00:39:02.611236 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:39:02.614677 init.sh[1605]: + daemon_pids+=($!) 
Mar 13 00:39:02.614677 init.sh[1605]: + for d in accounts clock_skew network Mar 13 00:39:02.614958 init.sh[1605]: + daemon_pids+=($!) Mar 13 00:39:02.615006 init.sh[1704]: + /usr/bin/google_clock_skew_daemon Mar 13 00:39:02.617658 init.sh[1605]: + for d in accounts clock_skew network Mar 13 00:39:02.617658 init.sh[1605]: + daemon_pids+=($!) Mar 13 00:39:02.617658 init.sh[1605]: + NOTIFY_SOCKET=/run/systemd/notify Mar 13 00:39:02.617658 init.sh[1605]: + /usr/bin/systemd-notify --ready Mar 13 00:39:02.618015 init.sh[1703]: + /usr/bin/google_accounts_daemon Mar 13 00:39:02.619942 init.sh[1705]: + /usr/bin/google_network_daemon Mar 13 00:39:02.649753 systemd[1]: Started oem-gce.service - GCE Linux Agent. Mar 13 00:39:02.650846 systemd-logind[1519]: New session 1 of user core. Mar 13 00:39:02.670568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:39:02.675298 init.sh[1605]: + wait -n 1703 1704 1705 Mar 13 00:39:02.690026 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:39:02.733663 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:39:02.740048 systemd-logind[1519]: New session c1 of user core. Mar 13 00:39:03.173533 systemd[1708]: Queued start job for default target default.target. Mar 13 00:39:03.180566 systemd[1708]: Created slice app.slice - User Application Slice. Mar 13 00:39:03.181350 systemd[1708]: Reached target paths.target - Paths. Mar 13 00:39:03.181715 systemd[1708]: Reached target timers.target - Timers. Mar 13 00:39:03.186818 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:39:03.233107 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:39:03.234511 systemd[1708]: Reached target sockets.target - Sockets. Mar 13 00:39:03.234601 systemd[1708]: Reached target basic.target - Basic System. 
Mar 13 00:39:03.234701 systemd[1708]: Reached target default.target - Main User Target. Mar 13 00:39:03.234756 systemd[1708]: Startup finished in 474ms. Mar 13 00:39:03.235823 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:39:03.257935 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:39:03.271822 google-clock-skew[1704]: INFO Starting Google Clock Skew daemon. Mar 13 00:39:03.281534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:03.289894 google-clock-skew[1704]: INFO Clock drift token has changed: 0. Mar 13 00:39:03.293765 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:39:03.304005 systemd[1]: Startup finished in 4.214s (kernel) + 8.307s (initrd) + 9.250s (userspace) = 21.773s. Mar 13 00:39:03.310455 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:39:03.423297 systemd[1]: Started sshd@1-10.128.0.75:22-20.161.92.111:34078.service - OpenSSH per-connection server daemon (20.161.92.111:34078). Mar 13 00:39:03.436022 google-networking[1705]: INFO Starting Google Networking daemon. Mar 13 00:39:03.474510 groupadd[1735]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 13 00:39:03.482209 groupadd[1735]: group added to /etc/gshadow: name=google-sudoers Mar 13 00:39:03.549362 groupadd[1735]: new group: name=google-sudoers, GID=1000 Mar 13 00:39:03.585107 google-accounts[1703]: INFO Starting Google Accounts daemon. Mar 13 00:39:03.602313 google-accounts[1703]: WARNING OS Login not installed. Mar 13 00:39:03.605140 google-accounts[1703]: INFO Creating a new user account for 0. Mar 13 00:39:03.615671 init.sh[1752]: useradd: invalid user name '0': use --badname to ignore Mar 13 00:39:03.616424 google-accounts[1703]: WARNING Could not create user 0. 
Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 13 00:39:04.001008 google-clock-skew[1704]: INFO Synced system time with hardware clock. Mar 13 00:39:04.003870 systemd-resolved[1427]: Clock change detected. Flushing caches. Mar 13 00:39:04.074341 sshd[1737]: Accepted publickey for core from 20.161.92.111 port 34078 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:39:04.078246 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:04.087211 systemd-logind[1519]: New session 2 of user core. Mar 13 00:39:04.093036 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:39:04.192979 sshd[1754]: Connection closed by 20.161.92.111 port 34078 Mar 13 00:39:04.194148 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:04.204258 systemd[1]: sshd@1-10.128.0.75:22-20.161.92.111:34078.service: Deactivated successfully. Mar 13 00:39:04.208123 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:39:04.211332 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:39:04.213695 systemd-logind[1519]: Removed session 2. Mar 13 00:39:04.241147 systemd[1]: Started sshd@2-10.128.0.75:22-20.161.92.111:34084.service - OpenSSH per-connection server daemon (20.161.92.111:34084). Mar 13 00:39:04.499379 sshd[1760]: Accepted publickey for core from 20.161.92.111 port 34084 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:39:04.502557 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:04.513506 systemd-logind[1519]: New session 3 of user core. Mar 13 00:39:04.519151 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 13 00:39:04.605135 sshd[1764]: Connection closed by 20.161.92.111 port 34084 Mar 13 00:39:04.606768 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:04.615450 systemd[1]: sshd@2-10.128.0.75:22-20.161.92.111:34084.service: Deactivated successfully. Mar 13 00:39:04.619399 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:39:04.622594 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:39:04.625769 systemd-logind[1519]: Removed session 3. Mar 13 00:39:04.653028 systemd[1]: Started sshd@3-10.128.0.75:22-20.161.92.111:34100.service - OpenSSH per-connection server daemon (20.161.92.111:34100). Mar 13 00:39:04.709380 kubelet[1727]: E0313 00:39:04.709310 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:39:04.712455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:39:04.712711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:39:04.713303 systemd[1]: kubelet.service: Consumed 1.290s CPU time, 259M memory peak. Mar 13 00:39:04.900473 sshd[1770]: Accepted publickey for core from 20.161.92.111 port 34100 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:39:04.902338 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:04.909803 systemd-logind[1519]: New session 4 of user core. Mar 13 00:39:04.916062 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 13 00:39:05.010152 sshd[1774]: Connection closed by 20.161.92.111 port 34100 Mar 13 00:39:05.011067 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:05.016255 systemd[1]: sshd@3-10.128.0.75:22-20.161.92.111:34100.service: Deactivated successfully. Mar 13 00:39:05.019722 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:39:05.023282 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:39:05.025134 systemd-logind[1519]: Removed session 4. Mar 13 00:39:05.058229 systemd[1]: Started sshd@4-10.128.0.75:22-20.161.92.111:34104.service - OpenSSH per-connection server daemon (20.161.92.111:34104). Mar 13 00:39:05.292612 sshd[1780]: Accepted publickey for core from 20.161.92.111 port 34104 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:39:05.294497 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:05.300389 systemd-logind[1519]: New session 5 of user core. Mar 13 00:39:05.307992 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:39:05.390476 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:39:05.390977 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:05.405256 sudo[1784]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:05.440588 sshd[1783]: Connection closed by 20.161.92.111 port 34104 Mar 13 00:39:05.443076 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:05.449052 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:39:05.449768 systemd[1]: sshd@4-10.128.0.75:22-20.161.92.111:34104.service: Deactivated successfully. Mar 13 00:39:05.452339 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:39:05.454801 systemd-logind[1519]: Removed session 5. 
Mar 13 00:39:05.489251 systemd[1]: Started sshd@5-10.128.0.75:22-20.161.92.111:34106.service - OpenSSH per-connection server daemon (20.161.92.111:34106). Mar 13 00:39:05.746773 sshd[1790]: Accepted publickey for core from 20.161.92.111 port 34106 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:39:05.748453 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:05.756864 systemd-logind[1519]: New session 6 of user core. Mar 13 00:39:05.762990 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:39:05.828862 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:39:05.829362 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:05.836354 sudo[1795]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:05.850557 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:39:05.851056 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:05.864300 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:39:05.912053 augenrules[1817]: No rules Mar 13 00:39:05.913875 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:39:05.914514 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:39:05.915945 sudo[1794]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:05.951598 sshd[1793]: Connection closed by 20.161.92.111 port 34106 Mar 13 00:39:05.953947 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:05.959508 systemd[1]: sshd@5-10.128.0.75:22-20.161.92.111:34106.service: Deactivated successfully. Mar 13 00:39:05.962146 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 13 00:39:05.963751 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit.
Mar 13 00:39:05.965537 systemd-logind[1519]: Removed session 6.
Mar 13 00:39:06.003592 systemd[1]: Started sshd@6-10.128.0.75:22-20.161.92.111:34114.service - OpenSSH per-connection server daemon (20.161.92.111:34114).
Mar 13 00:39:06.243761 sshd[1826]: Accepted publickey for core from 20.161.92.111 port 34114 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:39:06.245469 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:39:06.252824 systemd-logind[1519]: New session 7 of user core.
Mar 13 00:39:06.262049 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 13 00:39:06.330053 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 00:39:06.330811 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:39:06.850202 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 00:39:06.870848 (dockerd)[1848]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 00:39:07.265619 dockerd[1848]: time="2026-03-13T00:39:07.265440577Z" level=info msg="Starting up"
Mar 13 00:39:07.267786 dockerd[1848]: time="2026-03-13T00:39:07.267714095Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 13 00:39:07.288152 dockerd[1848]: time="2026-03-13T00:39:07.288058864Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 13 00:39:07.478575 dockerd[1848]: time="2026-03-13T00:39:07.478521432Z" level=info msg="Loading containers: start."
Mar 13 00:39:07.502799 kernel: Initializing XFRM netlink socket
Mar 13 00:39:07.927164 systemd-networkd[1425]: docker0: Link UP
Mar 13 00:39:07.936287 dockerd[1848]: time="2026-03-13T00:39:07.936211185Z" level=info msg="Loading containers: done."
Mar 13 00:39:07.957975 dockerd[1848]: time="2026-03-13T00:39:07.957898487Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 13 00:39:07.958211 dockerd[1848]: time="2026-03-13T00:39:07.958066258Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 13 00:39:07.958274 dockerd[1848]: time="2026-03-13T00:39:07.958212736Z" level=info msg="Initializing buildkit"
Mar 13 00:39:07.999902 dockerd[1848]: time="2026-03-13T00:39:07.999773006Z" level=info msg="Completed buildkit initialization"
Mar 13 00:39:08.012585 dockerd[1848]: time="2026-03-13T00:39:08.012477523Z" level=info msg="Daemon has completed initialization"
Mar 13 00:39:08.013079 dockerd[1848]: time="2026-03-13T00:39:08.012784264Z" level=info msg="API listen on /run/docker.sock"
Mar 13 00:39:08.013008 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 13 00:39:08.911719 containerd[1541]: time="2026-03-13T00:39:08.911646861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 13 00:39:09.434271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194245321.mount: Deactivated successfully.
Mar 13 00:39:10.994019 containerd[1541]: time="2026-03-13T00:39:10.993941382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:10.995748 containerd[1541]: time="2026-03-13T00:39:10.995679402Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27075928"
Mar 13 00:39:10.997611 containerd[1541]: time="2026-03-13T00:39:10.997546169Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:11.004361 containerd[1541]: time="2026-03-13T00:39:11.004260429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:11.006244 containerd[1541]: time="2026-03-13T00:39:11.005798056Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.09409115s"
Mar 13 00:39:11.006244 containerd[1541]: time="2026-03-13T00:39:11.005857861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 13 00:39:11.007161 containerd[1541]: time="2026-03-13T00:39:11.007127569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 13 00:39:12.479426 containerd[1541]: time="2026-03-13T00:39:12.479347066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:12.481123 containerd[1541]: time="2026-03-13T00:39:12.481073398Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21166069"
Mar 13 00:39:12.483024 containerd[1541]: time="2026-03-13T00:39:12.482958573Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:12.487065 containerd[1541]: time="2026-03-13T00:39:12.486987559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:12.488936 containerd[1541]: time="2026-03-13T00:39:12.488764021Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.481589627s"
Mar 13 00:39:12.488936 containerd[1541]: time="2026-03-13T00:39:12.488811317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 13 00:39:12.489592 containerd[1541]: time="2026-03-13T00:39:12.489546788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 13 00:39:13.653674 containerd[1541]: time="2026-03-13T00:39:13.652765073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:13.654273 containerd[1541]: time="2026-03-13T00:39:13.653974284Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15730052"
Mar 13 00:39:13.654890 containerd[1541]: time="2026-03-13T00:39:13.654827493Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:13.658821 containerd[1541]: time="2026-03-13T00:39:13.658711111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:13.661709 containerd[1541]: time="2026-03-13T00:39:13.660591104Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.170994017s"
Mar 13 00:39:13.661709 containerd[1541]: time="2026-03-13T00:39:13.660656166Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 13 00:39:13.662188 containerd[1541]: time="2026-03-13T00:39:13.661985324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 13 00:39:14.798184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542174552.mount: Deactivated successfully.
Mar 13 00:39:14.800658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:39:14.803147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:39:15.144022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:39:15.164566 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:39:15.278045 kubelet[2140]: E0313 00:39:15.277935 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:39:15.288156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:39:15.288396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:39:15.289292 systemd[1]: kubelet.service: Consumed 290ms CPU time, 110.3M memory peak.
Mar 13 00:39:15.486929 containerd[1541]: time="2026-03-13T00:39:15.486755368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:15.488708 containerd[1541]: time="2026-03-13T00:39:15.488629326Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25862097"
Mar 13 00:39:15.491139 containerd[1541]: time="2026-03-13T00:39:15.490802794Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:15.493966 containerd[1541]: time="2026-03-13T00:39:15.493899130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:15.495692 containerd[1541]: time="2026-03-13T00:39:15.495621012Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.833130327s"
Mar 13 00:39:15.495692 containerd[1541]: time="2026-03-13T00:39:15.495676599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 13 00:39:15.497057 containerd[1541]: time="2026-03-13T00:39:15.497003583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 13 00:39:15.924341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948399568.mount: Deactivated successfully.
Mar 13 00:39:17.306236 containerd[1541]: time="2026-03-13T00:39:17.306157483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:17.307777 containerd[1541]: time="2026-03-13T00:39:17.307604433Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22389461"
Mar 13 00:39:17.309772 containerd[1541]: time="2026-03-13T00:39:17.309687027Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:17.313353 containerd[1541]: time="2026-03-13T00:39:17.313288480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:17.315367 containerd[1541]: time="2026-03-13T00:39:17.314832990Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.817787052s"
Mar 13 00:39:17.315367 containerd[1541]: time="2026-03-13T00:39:17.314945115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 13 00:39:17.315925 containerd[1541]: time="2026-03-13T00:39:17.315856048Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 13 00:39:17.732156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535151005.mount: Deactivated successfully.
Mar 13 00:39:17.743088 containerd[1541]: time="2026-03-13T00:39:17.742990491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:17.744271 containerd[1541]: time="2026-03-13T00:39:17.744119022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321428"
Mar 13 00:39:17.745848 containerd[1541]: time="2026-03-13T00:39:17.745784440Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:17.750277 containerd[1541]: time="2026-03-13T00:39:17.750054387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:17.752428 containerd[1541]: time="2026-03-13T00:39:17.751957363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 436.05563ms"
Mar 13 00:39:17.752428 containerd[1541]: time="2026-03-13T00:39:17.752087537Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 13 00:39:17.752889 containerd[1541]: time="2026-03-13T00:39:17.752764631Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 13 00:39:18.215906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345205910.mount: Deactivated successfully.
Mar 13 00:39:19.555900 containerd[1541]: time="2026-03-13T00:39:19.555826129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:19.558499 containerd[1541]: time="2026-03-13T00:39:19.558189027Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22861753"
Mar 13 00:39:19.560005 containerd[1541]: time="2026-03-13T00:39:19.559962575Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:19.566305 containerd[1541]: time="2026-03-13T00:39:19.566226836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:39:19.568441 containerd[1541]: time="2026-03-13T00:39:19.567966926Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.815162679s"
Mar 13 00:39:19.568441 containerd[1541]: time="2026-03-13T00:39:19.568115687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 13 00:39:24.234879 systemd[1]: Started sshd@7-10.128.0.75:22-218.17.21.99:37408.service - OpenSSH per-connection server daemon (218.17.21.99:37408).
Mar 13 00:39:24.503362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:39:24.504181 systemd[1]: kubelet.service: Consumed 290ms CPU time, 110.3M memory peak.
Mar 13 00:39:24.507557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:39:24.552691 systemd[1]: Reload requested from client PID 2296 ('systemctl') (unit session-7.scope)...
Mar 13 00:39:24.552715 systemd[1]: Reloading...
Mar 13 00:39:24.784777 zram_generator::config[2346]: No configuration found.
Mar 13 00:39:25.094289 systemd[1]: Reloading finished in 540 ms.
Mar 13 00:39:25.184998 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 00:39:25.185140 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 00:39:25.185846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:39:25.185964 systemd[1]: kubelet.service: Consumed 179ms CPU time, 98.5M memory peak.
Mar 13 00:39:25.188654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:39:25.550098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:39:25.563686 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:39:25.635166 kubelet[2394]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 00:39:25.635166 kubelet[2394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:39:25.635695 kubelet[2394]: I0313 00:39:25.635229 2394 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:39:26.336628 kubelet[2394]: I0313 00:39:26.336559 2394 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 13 00:39:26.336628 kubelet[2394]: I0313 00:39:26.336596 2394 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:39:26.336628 kubelet[2394]: I0313 00:39:26.336637 2394 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:39:26.336910 kubelet[2394]: I0313 00:39:26.336652 2394 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:39:26.337114 kubelet[2394]: I0313 00:39:26.337074 2394 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:39:26.344347 kubelet[2394]: E0313 00:39:26.344266 2394 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:39:26.345302 kubelet[2394]: I0313 00:39:26.345221 2394 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:39:26.352061 kubelet[2394]: I0313 00:39:26.351997 2394 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:39:26.358381 kubelet[2394]: I0313 00:39:26.357925 2394 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:39:26.362275 kubelet[2394]: I0313 00:39:26.362218 2394 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:39:26.363271 kubelet[2394]: I0313 00:39:26.362933 2394 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:39:26.363827 kubelet[2394]: I0313 00:39:26.363768 2394 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:39:26.363827 kubelet[2394]: I0313 00:39:26.363814 2394 container_manager_linux.go:306] "Creating device plugin manager"
Mar 13 00:39:26.364019 kubelet[2394]: I0313 00:39:26.363981 2394 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:39:26.367377 kubelet[2394]: I0313 00:39:26.367323 2394 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:39:26.368169 kubelet[2394]: I0313 00:39:26.367655 2394 kubelet.go:475] "Attempting to sync node with API server"
Mar 13 00:39:26.368169 kubelet[2394]: I0313 00:39:26.367681 2394 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:39:26.368169 kubelet[2394]: I0313 00:39:26.367721 2394 kubelet.go:387] "Adding apiserver pod source"
Mar 13 00:39:26.368169 kubelet[2394]: I0313 00:39:26.367761 2394 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:39:26.370816 kubelet[2394]: E0313 00:39:26.370779 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:39:26.371087 kubelet[2394]: E0313 00:39:26.371059 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:39:26.371808 kubelet[2394]: I0313 00:39:26.371785 2394 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:39:26.372751 kubelet[2394]: I0313 00:39:26.372709 2394 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:39:26.372881 kubelet[2394]: I0313 00:39:26.372867 2394 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:39:26.372992 kubelet[2394]: W0313 00:39:26.372982 2394 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 00:39:26.391884 kubelet[2394]: I0313 00:39:26.391847 2394 server.go:1262] "Started kubelet"
Mar 13 00:39:26.393347 kubelet[2394]: I0313 00:39:26.393289 2394 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:39:26.396305 kubelet[2394]: I0313 00:39:26.395645 2394 server.go:310] "Adding debug handlers to kubelet server"
Mar 13 00:39:26.401662 kubelet[2394]: E0313 00:39:26.398502 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.189c3fad32d41534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,UID:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,},FirstTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,LastTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,}"
Mar 13 00:39:26.402557 kubelet[2394]: I0313 00:39:26.402497 2394 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:39:26.402639 kubelet[2394]: I0313 00:39:26.402570 2394 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:39:26.403561 kubelet[2394]: I0313 00:39:26.402929 2394 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:39:26.404983 kubelet[2394]: I0313 00:39:26.404963 2394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:39:26.405210 kubelet[2394]: E0313 00:39:26.405072 2394 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:39:26.405489 kubelet[2394]: I0313 00:39:26.405435 2394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:39:26.406251 kubelet[2394]: I0313 00:39:26.406057 2394 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 13 00:39:26.407647 kubelet[2394]: I0313 00:39:26.406820 2394 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:39:26.407760 kubelet[2394]: E0313 00:39:26.406971 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found"
Mar 13 00:39:26.408750 kubelet[2394]: I0313 00:39:26.407841 2394 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:39:26.408750 kubelet[2394]: E0313 00:39:26.408559 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:39:26.408750 kubelet[2394]: E0313 00:39:26.408659 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="200ms"
Mar 13 00:39:26.413288 kubelet[2394]: I0313 00:39:26.413258 2394 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:39:26.413288 kubelet[2394]: I0313 00:39:26.413286 2394 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:39:26.413456 kubelet[2394]: I0313 00:39:26.413389 2394 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:39:26.441303 kubelet[2394]: I0313 00:39:26.441249 2394 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:39:26.441303 kubelet[2394]: I0313 00:39:26.441298 2394 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:39:26.441524 kubelet[2394]: I0313 00:39:26.441322 2394 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:39:26.443780 kubelet[2394]: I0313 00:39:26.443719 2394 policy_none.go:49] "None policy: Start"
Mar 13 00:39:26.443905 kubelet[2394]: I0313 00:39:26.443806 2394 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:39:26.443905 kubelet[2394]: I0313 00:39:26.443827 2394 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:39:26.446483 kubelet[2394]: I0313 00:39:26.446195 2394 policy_none.go:47] "Start"
Mar 13 00:39:26.454056 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 00:39:26.465300 kubelet[2394]: I0313 00:39:26.465087 2394 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:39:26.472261 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 00:39:26.474540 kubelet[2394]: I0313 00:39:26.474077 2394 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:39:26.474540 kubelet[2394]: I0313 00:39:26.474109 2394 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 13 00:39:26.474540 kubelet[2394]: I0313 00:39:26.474146 2394 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 13 00:39:26.474540 kubelet[2394]: E0313 00:39:26.474221 2394 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:39:26.476524 kubelet[2394]: E0313 00:39:26.476497 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 00:39:26.484347 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 00:39:26.497772 kubelet[2394]: E0313 00:39:26.497190 2394 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:39:26.497772 kubelet[2394]: I0313 00:39:26.497503 2394 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:39:26.497772 kubelet[2394]: I0313 00:39:26.497523 2394 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:39:26.500034 kubelet[2394]: I0313 00:39:26.499996 2394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:39:26.501899 kubelet[2394]: E0313 00:39:26.501868 2394 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:39:26.502094 kubelet[2394]: E0313 00:39:26.502073 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:39:26.599785 systemd[1]: Created slice kubepods-burstable-podfc4a2567dc070343a9068d099c918271.slice - libcontainer container kubepods-burstable-podfc4a2567dc070343a9068d099c918271.slice. 
Mar 13 00:39:26.604655 kubelet[2394]: I0313 00:39:26.604618 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.605142 kubelet[2394]: E0313 00:39:26.605108 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.610814 kubelet[2394]: E0313 00:39:26.610220 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="400ms" Mar 13 00:39:26.610814 kubelet[2394]: E0313 00:39:26.610800 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.619762 systemd[1]: Created slice kubepods-burstable-pod485edd3c6dadd5db6059a9f95ede9b18.slice - libcontainer container kubepods-burstable-pod485edd3c6dadd5db6059a9f95ede9b18.slice. Mar 13 00:39:26.624831 kubelet[2394]: E0313 00:39:26.624792 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.630074 systemd[1]: Created slice kubepods-burstable-pod739f4adc468fea4ceec2008039f78ce1.slice - libcontainer container kubepods-burstable-pod739f4adc468fea4ceec2008039f78ce1.slice. 
Mar 13 00:39:26.633047 kubelet[2394]: E0313 00:39:26.632995 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.709478 kubelet[2394]: I0313 00:39:26.709367 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.709478 kubelet[2394]: I0313 00:39:26.709502 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.710170 kubelet[2394]: I0313 00:39:26.709534 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc4a2567dc070343a9068d099c918271-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"fc4a2567dc070343a9068d099c918271\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.710170 kubelet[2394]: I0313 00:39:26.709591 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/739f4adc468fea4ceec2008039f78ce1-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"739f4adc468fea4ceec2008039f78ce1\") " pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.710170 kubelet[2394]: I0313 00:39:26.709654 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc4a2567dc070343a9068d099c918271-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"fc4a2567dc070343a9068d099c918271\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.710170 kubelet[2394]: I0313 00:39:26.709784 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc4a2567dc070343a9068d099c918271-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"fc4a2567dc070343a9068d099c918271\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.710301 kubelet[2394]: I0313 00:39:26.709817 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.710301 kubelet[2394]: I0313 00:39:26.709881 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.710301 kubelet[2394]: I0313 00:39:26.709938 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.810753 kubelet[2394]: I0313 00:39:26.810295 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.811112 kubelet[2394]: E0313 00:39:26.811076 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:26.916344 containerd[1541]: time="2026-03-13T00:39:26.916170757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:fc4a2567dc070343a9068d099c918271,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:26.928803 containerd[1541]: time="2026-03-13T00:39:26.928661274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:485edd3c6dadd5db6059a9f95ede9b18,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:26.937492 containerd[1541]: time="2026-03-13T00:39:26.937389486Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:739f4adc468fea4ceec2008039f78ce1,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:27.011807 kubelet[2394]: E0313 00:39:27.011642 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="800ms" Mar 13 00:39:27.219231 kubelet[2394]: I0313 00:39:27.218670 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:27.219759 kubelet[2394]: E0313 00:39:27.219433 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:27.325445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304794848.mount: Deactivated successfully. 
Mar 13 00:39:27.336658 containerd[1541]: time="2026-03-13T00:39:27.336581587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:27.339248 containerd[1541]: time="2026-03-13T00:39:27.339151682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321348" Mar 13 00:39:27.340339 containerd[1541]: time="2026-03-13T00:39:27.340251793Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:27.341859 containerd[1541]: time="2026-03-13T00:39:27.341801185Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:27.343157 containerd[1541]: time="2026-03-13T00:39:27.343105261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:39:27.346661 containerd[1541]: time="2026-03-13T00:39:27.346585553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:27.348753 containerd[1541]: time="2026-03-13T00:39:27.347525282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 416.26219ms" Mar 13 00:39:27.349606 containerd[1541]: 
time="2026-03-13T00:39:27.349545415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 408.36936ms" Mar 13 00:39:27.392277 kubelet[2394]: E0313 00:39:27.391955 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.189c3fad32d41534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,UID:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,},FirstTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,LastTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,}" Mar 13 00:39:27.403762 containerd[1541]: time="2026-03-13T00:39:27.403389063Z" level=info msg="connecting to shim 0d83a9948b33e6e2220f8f92f776614ad2437f0e599448c3b5cbe8c16f74a516" address="unix:///run/containerd/s/392a0f030fa7d1d92438b807dac5b4325ed4f198d58cb3310c597df7f46472fb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:27.414763 containerd[1541]: time="2026-03-13T00:39:27.413041433Z" level=info msg="connecting to shim 5f99dbadccb1b5ae47595f24765a0c8f782b66c703df4003d99f54995ba5080b" 
address="unix:///run/containerd/s/50b982da8c11c18c058858208a0431c0858c8c6aab846f10d0b90095ae819a06" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:27.460040 systemd[1]: Started cri-containerd-0d83a9948b33e6e2220f8f92f776614ad2437f0e599448c3b5cbe8c16f74a516.scope - libcontainer container 0d83a9948b33e6e2220f8f92f776614ad2437f0e599448c3b5cbe8c16f74a516. Mar 13 00:39:27.477864 systemd[1]: Started cri-containerd-5f99dbadccb1b5ae47595f24765a0c8f782b66c703df4003d99f54995ba5080b.scope - libcontainer container 5f99dbadccb1b5ae47595f24765a0c8f782b66c703df4003d99f54995ba5080b. Mar 13 00:39:27.481133 kubelet[2394]: E0313 00:39:27.480957 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:39:27.527768 kubelet[2394]: E0313 00:39:27.526616 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:39:27.576482 containerd[1541]: time="2026-03-13T00:39:27.576245907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:485edd3c6dadd5db6059a9f95ede9b18,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d83a9948b33e6e2220f8f92f776614ad2437f0e599448c3b5cbe8c16f74a516\"" Mar 13 00:39:27.581750 kubelet[2394]: E0313 00:39:27.581244 2394 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" 
podName="kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4e" Mar 13 00:39:27.590068 containerd[1541]: time="2026-03-13T00:39:27.589993608Z" level=info msg="CreateContainer within sandbox \"0d83a9948b33e6e2220f8f92f776614ad2437f0e599448c3b5cbe8c16f74a516\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:39:27.592217 containerd[1541]: time="2026-03-13T00:39:27.592051362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:739f4adc468fea4ceec2008039f78ce1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f99dbadccb1b5ae47595f24765a0c8f782b66c703df4003d99f54995ba5080b\"" Mar 13 00:39:27.594848 kubelet[2394]: E0313 00:39:27.594800 2394 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc8" Mar 13 00:39:27.599639 containerd[1541]: time="2026-03-13T00:39:27.599590635Z" level=info msg="CreateContainer within sandbox \"5f99dbadccb1b5ae47595f24765a0c8f782b66c703df4003d99f54995ba5080b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:39:27.605688 containerd[1541]: time="2026-03-13T00:39:27.605636008Z" level=info msg="Container 5561574ec5b0dbccb18d37745d9920b91967cc843e122108bb79445d562ee20d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:27.620015 containerd[1541]: time="2026-03-13T00:39:27.619933789Z" level=info msg="CreateContainer within sandbox \"0d83a9948b33e6e2220f8f92f776614ad2437f0e599448c3b5cbe8c16f74a516\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5561574ec5b0dbccb18d37745d9920b91967cc843e122108bb79445d562ee20d\"" Mar 13 00:39:27.620622 
containerd[1541]: time="2026-03-13T00:39:27.620536672Z" level=info msg="Container b208b120cd767163eda8ec59e342e4cbb08551bcc34cc7e37ff2b7280b02e314: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:27.622201 containerd[1541]: time="2026-03-13T00:39:27.622128462Z" level=info msg="StartContainer for \"5561574ec5b0dbccb18d37745d9920b91967cc843e122108bb79445d562ee20d\"" Mar 13 00:39:27.624303 containerd[1541]: time="2026-03-13T00:39:27.624201151Z" level=info msg="connecting to shim 5561574ec5b0dbccb18d37745d9920b91967cc843e122108bb79445d562ee20d" address="unix:///run/containerd/s/392a0f030fa7d1d92438b807dac5b4325ed4f198d58cb3310c597df7f46472fb" protocol=ttrpc version=3 Mar 13 00:39:27.635468 containerd[1541]: time="2026-03-13T00:39:27.635377423Z" level=info msg="CreateContainer within sandbox \"5f99dbadccb1b5ae47595f24765a0c8f782b66c703df4003d99f54995ba5080b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b208b120cd767163eda8ec59e342e4cbb08551bcc34cc7e37ff2b7280b02e314\"" Mar 13 00:39:27.636352 containerd[1541]: time="2026-03-13T00:39:27.636299723Z" level=info msg="StartContainer for \"b208b120cd767163eda8ec59e342e4cbb08551bcc34cc7e37ff2b7280b02e314\"" Mar 13 00:39:27.639610 containerd[1541]: time="2026-03-13T00:39:27.639468244Z" level=info msg="connecting to shim b208b120cd767163eda8ec59e342e4cbb08551bcc34cc7e37ff2b7280b02e314" address="unix:///run/containerd/s/50b982da8c11c18c058858208a0431c0858c8c6aab846f10d0b90095ae819a06" protocol=ttrpc version=3 Mar 13 00:39:27.670202 systemd[1]: Started cri-containerd-5561574ec5b0dbccb18d37745d9920b91967cc843e122108bb79445d562ee20d.scope - libcontainer container 5561574ec5b0dbccb18d37745d9920b91967cc843e122108bb79445d562ee20d. Mar 13 00:39:27.682346 systemd[1]: Started cri-containerd-b208b120cd767163eda8ec59e342e4cbb08551bcc34cc7e37ff2b7280b02e314.scope - libcontainer container b208b120cd767163eda8ec59e342e4cbb08551bcc34cc7e37ff2b7280b02e314. 
Mar 13 00:39:27.791629 containerd[1541]: time="2026-03-13T00:39:27.790253660Z" level=info msg="StartContainer for \"5561574ec5b0dbccb18d37745d9920b91967cc843e122108bb79445d562ee20d\" returns successfully" Mar 13 00:39:27.801679 containerd[1541]: time="2026-03-13T00:39:27.801440802Z" level=info msg="StartContainer for \"b208b120cd767163eda8ec59e342e4cbb08551bcc34cc7e37ff2b7280b02e314\" returns successfully" Mar 13 00:39:27.812813 kubelet[2394]: E0313 00:39:27.812429 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="1.6s" Mar 13 00:39:27.890763 kubelet[2394]: E0313 00:39:27.890213 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:27.986984 kubelet[2394]: E0313 00:39:27.986922 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:39:28.025562 kubelet[2394]: I0313 00:39:28.025497 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:28.026351 kubelet[2394]: E0313 00:39:28.026312 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection 
refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:28.398873 kubelet[2394]: E0313 00:39:28.398800 2394 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:39:28.502213 kubelet[2394]: E0313 00:39:28.501806 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:28.502213 kubelet[2394]: E0313 00:39:28.502017 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:29.416779 kubelet[2394]: E0313 00:39:29.414785 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="3.2s" Mar 13 00:39:29.433980 kubelet[2394]: E0313 00:39:29.433907 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:39:29.508757 kubelet[2394]: E0313 00:39:29.508639 2394 kubelet.go:3216] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:29.633104 kubelet[2394]: I0313 00:39:29.633071 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:29.633798 kubelet[2394]: E0313 00:39:29.633724 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:29.660755 kubelet[2394]: E0313 00:39:29.660320 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:39:30.065770 kubelet[2394]: E0313 00:39:30.065671 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:30.932028 kubelet[2394]: E0313 00:39:30.931962 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:39:32.028284 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 13 00:39:32.618938 kubelet[2394]: E0313 00:39:32.618870 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="6.4s" Mar 13 00:39:32.700713 kubelet[2394]: E0313 00:39:32.700650 2394 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:39:32.839320 kubelet[2394]: I0313 00:39:32.839257 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:32.839918 kubelet[2394]: E0313 00:39:32.839716 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:32.913339 kubelet[2394]: E0313 00:39:32.913174 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:39:34.180778 kubelet[2394]: E0313 00:39:34.180675 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:34.442330 kubelet[2394]: E0313 00:39:34.442200 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:35.060790 kubelet[2394]: E0313 00:39:35.060681 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:39:35.413711 kubelet[2394]: E0313 00:39:35.413533 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:39:36.502771 kubelet[2394]: E0313 00:39:36.502693 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:39:37.393139 kubelet[2394]: E0313 00:39:37.393006 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.189c3fad32d41534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,UID:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,},FirstTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,LastTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,}" Mar 13 00:39:37.402345 kubelet[2394]: E0313 00:39:37.402292 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:37.527772 kubelet[2394]: E0313 00:39:37.527691 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:39.020470 kubelet[2394]: E0313 00:39:39.020366 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="7s" Mar 13 00:39:39.248580 kubelet[2394]: I0313 00:39:39.248330 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:39.248886 kubelet[2394]: E0313 00:39:39.248833 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:40.714693 kubelet[2394]: E0313 00:39:40.714597 2394 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:39:41.075357 kubelet[2394]: E0313 00:39:41.075101 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:39:42.450426 systemd[1]: Started sshd@8-10.128.0.75:22-116.203.152.173:44094.service - OpenSSH per-connection server daemon (116.203.152.173:44094). 
Mar 13 00:39:42.814577 kubelet[2394]: E0313 00:39:42.814422 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:43.145569 sshd[2599]: Invalid user yan from 116.203.152.173 port 44094 Mar 13 00:39:43.279448 sshd[2599]: Received disconnect from 116.203.152.173 port 44094:11: Bye Bye [preauth] Mar 13 00:39:43.279448 sshd[2599]: Disconnected from invalid user yan 116.203.152.173 port 44094 [preauth] Mar 13 00:39:43.282934 systemd[1]: sshd@8-10.128.0.75:22-116.203.152.173:44094.service: Deactivated successfully. Mar 13 00:39:44.319969 kubelet[2394]: E0313 00:39:44.319901 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:39:44.448590 kubelet[2394]: E0313 00:39:44.448500 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:45.918920 update_engine[1524]: I20260313 00:39:45.918817 1524 update_attempter.cc:509] Updating boot flags... 
Mar 13 00:39:46.021211 kubelet[2394]: E0313 00:39:46.021133 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="7s" Mar 13 00:39:46.256899 kubelet[2394]: I0313 00:39:46.256419 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:46.258040 kubelet[2394]: E0313 00:39:46.257712 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:46.504612 kubelet[2394]: E0313 00:39:46.503714 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:39:47.229719 kubelet[2394]: E0313 00:39:47.229619 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:39:47.394594 kubelet[2394]: E0313 00:39:47.394412 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.189c3fad32d41534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,UID:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,},FirstTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,LastTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,}" Mar 13 00:39:52.691338 systemd[1]: Started sshd@9-10.128.0.75:22-183.250.89.44:49737.service - OpenSSH per-connection server daemon (183.250.89.44:49737). Mar 13 00:39:53.022759 kubelet[2394]: E0313 00:39:53.022532 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="7s" Mar 13 00:39:53.265249 kubelet[2394]: I0313 00:39:53.265198 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:53.265668 kubelet[2394]: E0313 00:39:53.265609 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:56.503984 kubelet[2394]: E0313 00:39:56.503896 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:39:56.921698 containerd[1541]: 
time="2026-03-13T00:39:56.921610736Z" level=info msg="fetch failed" error="failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.10\": dial tcp 34.96.108.209:443: i/o timeout" host=registry.k8s.io Mar 13 00:39:56.923532 containerd[1541]: time="2026-03-13T00:39:56.923463104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:39:56.925176 containerd[1541]: time="2026-03-13T00:39:56.925111893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:fc4a2567dc070343a9068d099c918271,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to start sandbox \"6809515bb9d078f2a3b8c892cee98de19ffeb17204447b6259304a40c77f3050\": failed to get sandbox image \"registry.k8s.io/pause:3.10\": failed to pull image \"registry.k8s.io/pause:3.10\": failed to pull and unpack image \"registry.k8s.io/pause:3.10\": failed to resolve reference \"registry.k8s.io/pause:3.10\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.10\": dial tcp 34.96.108.209:443: i/o timeout" Mar 13 00:39:56.925583 kubelet[2394]: E0313 00:39:56.925502 2394 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to start sandbox \"6809515bb9d078f2a3b8c892cee98de19ffeb17204447b6259304a40c77f3050\": failed to get sandbox image \"registry.k8s.io/pause:3.10\": failed to pull image \"registry.k8s.io/pause:3.10\": failed to pull and unpack image \"registry.k8s.io/pause:3.10\": failed to resolve reference \"registry.k8s.io/pause:3.10\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.10\": dial tcp 34.96.108.209:443: i/o timeout" Mar 13 00:39:56.925857 kubelet[2394]: E0313 00:39:56.925597 2394 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to start sandbox 
\"6809515bb9d078f2a3b8c892cee98de19ffeb17204447b6259304a40c77f3050\": failed to get sandbox image \"registry.k8s.io/pause:3.10\": failed to pull image \"registry.k8s.io/pause:3.10\": failed to pull and unpack image \"registry.k8s.io/pause:3.10\": failed to resolve reference \"registry.k8s.io/pause:3.10\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.10\": dial tcp 34.96.108.209:443: i/o timeout" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:56.925857 kubelet[2394]: E0313 00:39:56.925660 2394 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to start sandbox \"6809515bb9d078f2a3b8c892cee98de19ffeb17204447b6259304a40c77f3050\": failed to get sandbox image \"registry.k8s.io/pause:3.10\": failed to pull image \"registry.k8s.io/pause:3.10\": failed to pull and unpack image \"registry.k8s.io/pause:3.10\": failed to resolve reference \"registry.k8s.io/pause:3.10\": failed to do request: Head \"https://registry.k8s.io/v2/pause/manifests/3.10\": dial tcp 34.96.108.209:443: i/o timeout" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:39:56.926094 kubelet[2394]: E0313 00:39:56.925792 2394 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136_kube-system(fc4a2567dc070343a9068d099c918271)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136_kube-system(fc4a2567dc070343a9068d099c918271)\\\": rpc error: code = Unknown desc = failed to start sandbox \\\"6809515bb9d078f2a3b8c892cee98de19ffeb17204447b6259304a40c77f3050\\\": failed to get sandbox image \\\"registry.k8s.io/pause:3.10\\\": failed to pull image \\\"registry.k8s.io/pause:3.10\\\": failed to pull and unpack image 
\\\"registry.k8s.io/pause:3.10\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.10\\\": failed to do request: Head \\\"https://registry.k8s.io/v2/pause/manifests/3.10\\\": dial tcp 34.96.108.209:443: i/o timeout\"" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" podUID="fc4a2567dc070343a9068d099c918271" Mar 13 00:39:57.396197 kubelet[2394]: E0313 00:39:57.396032 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.189c3fad32d41534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,UID:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,},FirstTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,LastTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,}" Mar 13 00:39:57.690850 kubelet[2394]: E0313 00:39:57.690645 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:58.006815 kubelet[2394]: E0313 00:39:58.006626 2394 certificate_manager.go:596] "Failed while requesting a signed 
certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:39:58.006815 kubelet[2394]: E0313 00:39:58.006673 2394 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:40:00.023829 kubelet[2394]: E0313 00:40:00.023692 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="7s" Mar 13 00:40:00.271630 kubelet[2394]: I0313 00:40:00.271587 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:00.272280 kubelet[2394]: E0313 00:40:00.272225 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:02.911345 systemd[1]: Started sshd@10-10.128.0.75:22-20.26.135.100:41282.service - OpenSSH per-connection server daemon (20.26.135.100:41282). Mar 13 00:40:03.521119 sshd[2633]: Invalid user pilot from 20.26.135.100 port 41282 Mar 13 00:40:03.634825 sshd[2633]: Received disconnect from 20.26.135.100 port 41282:11: Bye Bye [preauth] Mar 13 00:40:03.634825 sshd[2633]: Disconnected from invalid user pilot 20.26.135.100 port 41282 [preauth] Mar 13 00:40:03.638445 systemd[1]: sshd@10-10.128.0.75:22-20.26.135.100:41282.service: Deactivated successfully. 
Mar 13 00:40:05.431115 kubelet[2394]: E0313 00:40:05.431051 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:40:06.118177 kubelet[2394]: E0313 00:40:06.118115 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136&limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:40:06.505048 kubelet[2394]: E0313 00:40:06.504893 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:07.024892 kubelet[2394]: E0313 00:40:07.024825 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136?timeout=10s\": dial tcp 10.128.0.75:6443: connect: connection refused" interval="7s" Mar 13 00:40:07.279594 kubelet[2394]: I0313 00:40:07.279458 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:07.280161 kubelet[2394]: E0313 00:40:07.280119 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.75:6443/api/v1/nodes\": dial tcp 10.128.0.75:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:07.397648 kubelet[2394]: E0313 00:40:07.397508 2394 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136.189c3fad32d41534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,UID:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,},FirstTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,LastTimestamp:2026-03-13 00:39:26.391784756 +0000 UTC m=+0.820931419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,}" Mar 13 00:40:08.336488 kubelet[2394]: E0313 00:40:08.336430 2394 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.75:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:40:11.482553 kubelet[2394]: E0313 00:40:11.482170 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:11.488755 containerd[1541]: time="2026-03-13T00:40:11.487972665Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:fc4a2567dc070343a9068d099c918271,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:11.521485 containerd[1541]: time="2026-03-13T00:40:11.521427365Z" level=info msg="connecting to shim 949cadcfcf8aaf2f4e5f146453f8aff396a0bc355e4fe0896068df065f09ff93" address="unix:///run/containerd/s/8f5d5a91f7462ca4413780692cdc5295e4204db39577aa59b401bee22fb0bb89" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:11.556008 systemd[1]: Started cri-containerd-949cadcfcf8aaf2f4e5f146453f8aff396a0bc355e4fe0896068df065f09ff93.scope - libcontainer container 949cadcfcf8aaf2f4e5f146453f8aff396a0bc355e4fe0896068df065f09ff93. Mar 13 00:40:11.623317 containerd[1541]: time="2026-03-13T00:40:11.623250723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136,Uid:fc4a2567dc070343a9068d099c918271,Namespace:kube-system,Attempt:0,} returns sandbox id \"949cadcfcf8aaf2f4e5f146453f8aff396a0bc355e4fe0896068df065f09ff93\"" Mar 13 00:40:11.625632 kubelet[2394]: E0313 00:40:11.625593 2394 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc8" Mar 13 00:40:11.630273 containerd[1541]: time="2026-03-13T00:40:11.630225044Z" level=info msg="CreateContainer within sandbox \"949cadcfcf8aaf2f4e5f146453f8aff396a0bc355e4fe0896068df065f09ff93\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:40:11.653588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240448063.mount: Deactivated successfully. 
Mar 13 00:40:11.654252 containerd[1541]: time="2026-03-13T00:40:11.654200509Z" level=info msg="Container 14203b2d9c73c3eb16d65df1cf524255319eb739047d95ace61d623ea9450eac: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:11.669958 containerd[1541]: time="2026-03-13T00:40:11.669879529Z" level=info msg="CreateContainer within sandbox \"949cadcfcf8aaf2f4e5f146453f8aff396a0bc355e4fe0896068df065f09ff93\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14203b2d9c73c3eb16d65df1cf524255319eb739047d95ace61d623ea9450eac\"" Mar 13 00:40:11.671766 containerd[1541]: time="2026-03-13T00:40:11.671343009Z" level=info msg="StartContainer for \"14203b2d9c73c3eb16d65df1cf524255319eb739047d95ace61d623ea9450eac\"" Mar 13 00:40:11.672698 containerd[1541]: time="2026-03-13T00:40:11.672641563Z" level=info msg="connecting to shim 14203b2d9c73c3eb16d65df1cf524255319eb739047d95ace61d623ea9450eac" address="unix:///run/containerd/s/8f5d5a91f7462ca4413780692cdc5295e4204db39577aa59b401bee22fb0bb89" protocol=ttrpc version=3 Mar 13 00:40:11.703068 systemd[1]: Started cri-containerd-14203b2d9c73c3eb16d65df1cf524255319eb739047d95ace61d623ea9450eac.scope - libcontainer container 14203b2d9c73c3eb16d65df1cf524255319eb739047d95ace61d623ea9450eac. 
Mar 13 00:40:11.782529 containerd[1541]: time="2026-03-13T00:40:11.782359168Z" level=info msg="StartContainer for \"14203b2d9c73c3eb16d65df1cf524255319eb739047d95ace61d623ea9450eac\" returns successfully" Mar 13 00:40:12.613760 kubelet[2394]: E0313 00:40:12.613674 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:13.617754 kubelet[2394]: E0313 00:40:13.617547 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:14.222899 kubelet[2394]: E0313 00:40:14.222802 2394 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:14.288009 kubelet[2394]: I0313 00:40:14.287903 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:14.308900 kubelet[2394]: I0313 00:40:14.306787 2394 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:14.308900 kubelet[2394]: E0313 00:40:14.308787 2394 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\": node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:14.328354 kubelet[2394]: E0313 00:40:14.328312 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 
00:40:14.429642 kubelet[2394]: E0313 00:40:14.429559 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:14.530098 kubelet[2394]: E0313 00:40:14.529930 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:14.630136 kubelet[2394]: E0313 00:40:14.630074 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:14.731138 kubelet[2394]: E0313 00:40:14.731050 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:14.831300 kubelet[2394]: E0313 00:40:14.831191 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:14.931555 kubelet[2394]: E0313 00:40:14.931457 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.031762 kubelet[2394]: E0313 00:40:15.031677 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.133482 kubelet[2394]: E0313 00:40:15.132833 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.233823 kubelet[2394]: E0313 00:40:15.233711 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.334452 kubelet[2394]: E0313 
00:40:15.334374 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.435541 kubelet[2394]: E0313 00:40:15.434975 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.535448 kubelet[2394]: E0313 00:40:15.535380 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.636100 kubelet[2394]: E0313 00:40:15.636035 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.736895 kubelet[2394]: E0313 00:40:15.736666 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.837816 kubelet[2394]: E0313 00:40:15.837752 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:15.938660 kubelet[2394]: E0313 00:40:15.938592 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.038978 kubelet[2394]: E0313 00:40:16.038820 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.139931 kubelet[2394]: E0313 00:40:16.139717 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.240617 kubelet[2394]: E0313 00:40:16.240513 2394 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.341629 kubelet[2394]: E0313 00:40:16.341566 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.442208 kubelet[2394]: E0313 00:40:16.441948 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.505201 kubelet[2394]: E0313 00:40:16.505091 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.543423 kubelet[2394]: E0313 00:40:16.543129 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.564979 systemd[1]: Reload requested from client PID 2724 ('systemctl') (unit session-7.scope)... Mar 13 00:40:16.565005 systemd[1]: Reloading... Mar 13 00:40:16.644133 kubelet[2394]: E0313 00:40:16.643939 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.706773 zram_generator::config[2768]: No configuration found. 
Mar 13 00:40:16.745084 kubelet[2394]: E0313 00:40:16.745010 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.845274 kubelet[2394]: E0313 00:40:16.845216 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:16.946011 kubelet[2394]: E0313 00:40:16.945858 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:17.046607 kubelet[2394]: E0313 00:40:17.046413 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:17.147625 kubelet[2394]: E0313 00:40:17.147513 2394 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" Mar 13 00:40:17.159335 kubelet[2394]: E0313 00:40:17.158984 2394 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" not found" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:17.173707 systemd[1]: Reloading finished in 607 ms. Mar 13 00:40:17.217349 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:17.234691 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:40:17.235329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:17.235585 systemd[1]: kubelet.service: Consumed 1.929s CPU time, 128.9M memory peak. Mar 13 00:40:17.241628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:40:17.559169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:17.574774 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:40:17.660256 kubelet[2820]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:40:17.660256 kubelet[2820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:40:17.660256 kubelet[2820]: I0313 00:40:17.660026 2820 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:40:17.671603 kubelet[2820]: I0313 00:40:17.671507 2820 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:40:17.671603 kubelet[2820]: I0313 00:40:17.671545 2820 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:40:17.671603 kubelet[2820]: I0313 00:40:17.671585 2820 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:40:17.671603 kubelet[2820]: I0313 00:40:17.671603 2820 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:40:17.672266 kubelet[2820]: I0313 00:40:17.671969 2820 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:40:17.673676 kubelet[2820]: I0313 00:40:17.673650 2820 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:40:17.678754 kubelet[2820]: I0313 00:40:17.678266 2820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:40:17.697768 kubelet[2820]: I0313 00:40:17.696586 2820 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:40:17.702771 kubelet[2820]: I0313 00:40:17.702214 2820 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 13 00:40:17.702771 kubelet[2820]: I0313 00:40:17.702583 2820 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:40:17.703201 kubelet[2820]: I0313 00:40:17.702633 2820 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:40:17.703475 kubelet[2820]: I0313 00:40:17.703453 2820 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:40:17.703591 kubelet[2820]: I0313 00:40:17.703577 2820 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:40:17.703707 kubelet[2820]: I0313 00:40:17.703693 2820 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:40:17.704134 kubelet[2820]: I0313 
00:40:17.704112 2820 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:40:17.704602 kubelet[2820]: I0313 00:40:17.704584 2820 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:40:17.705432 kubelet[2820]: I0313 00:40:17.705398 2820 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:40:17.705599 kubelet[2820]: I0313 00:40:17.705582 2820 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:40:17.705699 kubelet[2820]: I0313 00:40:17.705685 2820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:40:17.710895 kubelet[2820]: I0313 00:40:17.710862 2820 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:40:17.714760 kubelet[2820]: I0313 00:40:17.713910 2820 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:40:17.714760 kubelet[2820]: I0313 00:40:17.713971 2820 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:40:17.725997 sudo[2834]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 13 00:40:17.726657 sudo[2834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 13 00:40:17.779787 kubelet[2820]: I0313 00:40:17.779327 2820 server.go:1262] "Started kubelet" Mar 13 00:40:17.789567 kubelet[2820]: I0313 00:40:17.789535 2820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:40:17.795849 kubelet[2820]: E0313 00:40:17.795814 2820 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:40:17.797808 kubelet[2820]: I0313 00:40:17.796190 2820 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:40:17.799271 kubelet[2820]: I0313 00:40:17.799236 2820 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:40:17.801068 kubelet[2820]: I0313 00:40:17.796268 2820 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:40:17.801284 kubelet[2820]: I0313 00:40:17.801224 2820 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:40:17.803206 kubelet[2820]: I0313 00:40:17.803158 2820 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:40:17.805754 kubelet[2820]: I0313 00:40:17.804720 2820 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:40:17.807990 kubelet[2820]: I0313 00:40:17.807966 2820 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:40:17.811799 kubelet[2820]: I0313 00:40:17.808541 2820 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:40:17.812817 kubelet[2820]: I0313 00:40:17.812120 2820 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:40:17.816853 kubelet[2820]: I0313 00:40:17.816167 2820 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:40:17.822750 kubelet[2820]: I0313 00:40:17.822645 2820 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:40:17.827555 kubelet[2820]: I0313 00:40:17.827529 2820 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:40:17.911352 
kubelet[2820]: I0313 00:40:17.911182 2820 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:40:17.926274 kubelet[2820]: I0313 00:40:17.926218 2820 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:40:17.926274 kubelet[2820]: I0313 00:40:17.926274 2820 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:40:17.926494 kubelet[2820]: I0313 00:40:17.926311 2820 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:40:17.926494 kubelet[2820]: E0313 00:40:17.926381 2820 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:40:18.006608 kubelet[2820]: I0313 00:40:18.006475 2820 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:40:18.006608 kubelet[2820]: I0313 00:40:18.006525 2820 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:40:18.006608 kubelet[2820]: I0313 00:40:18.006596 2820 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:40:18.006922 kubelet[2820]: I0313 00:40:18.006841 2820 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:40:18.006922 kubelet[2820]: I0313 00:40:18.006858 2820 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:40:18.006922 kubelet[2820]: I0313 00:40:18.006883 2820 policy_none.go:49] "None policy: Start" Mar 13 00:40:18.006922 kubelet[2820]: I0313 00:40:18.006899 2820 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:40:18.006922 kubelet[2820]: I0313 00:40:18.006915 2820 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:40:18.007192 kubelet[2820]: I0313 00:40:18.007147 2820 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:40:18.007192 kubelet[2820]: I0313 00:40:18.007163 2820 policy_none.go:47] "Start" Mar 13 
00:40:18.020025 kubelet[2820]: E0313 00:40:18.019975 2820 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:40:18.020492 kubelet[2820]: I0313 00:40:18.020262 2820 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:40:18.020492 kubelet[2820]: I0313 00:40:18.020283 2820 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:40:18.023015 kubelet[2820]: I0313 00:40:18.022982 2820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:40:18.025770 kubelet[2820]: E0313 00:40:18.025712 2820 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:40:18.035040 kubelet[2820]: I0313 00:40:18.033182 2820 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.037483 kubelet[2820]: I0313 00:40:18.036224 2820 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.041690 kubelet[2820]: I0313 00:40:18.041528 2820 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.070854 kubelet[2820]: I0313 00:40:18.068605 2820 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 00:40:18.077540 kubelet[2820]: I0313 00:40:18.077476 2820 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 00:40:18.079379 
kubelet[2820]: I0313 00:40:18.078618 2820 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 00:40:18.116091 kubelet[2820]: I0313 00:40:18.116004 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc4a2567dc070343a9068d099c918271-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"fc4a2567dc070343a9068d099c918271\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116091 kubelet[2820]: I0313 00:40:18.116075 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116091 kubelet[2820]: I0313 00:40:18.116103 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116537 kubelet[2820]: I0313 00:40:18.116135 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-k8s-certs\") pod 
\"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116537 kubelet[2820]: I0313 00:40:18.116165 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116537 kubelet[2820]: I0313 00:40:18.116190 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/485edd3c6dadd5db6059a9f95ede9b18-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"485edd3c6dadd5db6059a9f95ede9b18\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116537 kubelet[2820]: I0313 00:40:18.116217 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/739f4adc468fea4ceec2008039f78ce1-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"739f4adc468fea4ceec2008039f78ce1\") " pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116910 kubelet[2820]: I0313 00:40:18.116255 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc4a2567dc070343a9068d099c918271-ca-certs\") pod 
\"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"fc4a2567dc070343a9068d099c918271\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.116910 kubelet[2820]: I0313 00:40:18.116283 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc4a2567dc070343a9068d099c918271-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" (UID: \"fc4a2567dc070343a9068d099c918271\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.153983 kubelet[2820]: I0313 00:40:18.153941 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.172206 kubelet[2820]: I0313 00:40:18.172100 2820 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.172755 kubelet[2820]: I0313 00:40:18.172661 2820 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" Mar 13 00:40:18.468552 sudo[2834]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:18.721850 kubelet[2820]: I0313 00:40:18.721658 2820 apiserver.go:52] "Watching apiserver" Mar 13 00:40:18.812350 kubelet[2820]: I0313 00:40:18.812295 2820 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:40:19.051000 kubelet[2820]: I0313 00:40:19.050831 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" podStartSLOduration=1.050803519 podStartE2EDuration="1.050803519s" podCreationTimestamp="2026-03-13 00:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:19.031878597 +0000 UTC m=+1.449178305" watchObservedRunningTime="2026-03-13 00:40:19.050803519 +0000 UTC m=+1.468103218" Mar 13 00:40:19.067879 kubelet[2820]: I0313 00:40:19.067810 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" podStartSLOduration=1.067787611 podStartE2EDuration="1.067787611s" podCreationTimestamp="2026-03-13 00:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:19.051520818 +0000 UTC m=+1.468820526" watchObservedRunningTime="2026-03-13 00:40:19.067787611 +0000 UTC m=+1.485087312" Mar 13 00:40:19.090761 kubelet[2820]: I0313 00:40:19.089941 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" podStartSLOduration=1.089900722 podStartE2EDuration="1.089900722s" podCreationTimestamp="2026-03-13 00:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:19.068238159 +0000 UTC m=+1.485537868" watchObservedRunningTime="2026-03-13 00:40:19.089900722 +0000 UTC m=+1.507200431" Mar 13 00:40:20.791928 sudo[1830]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:20.828524 sshd[1829]: Connection closed by 20.161.92.111 port 34114 Mar 13 00:40:20.829407 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:20.836033 systemd[1]: sshd@6-10.128.0.75:22-20.161.92.111:34114.service: Deactivated successfully. Mar 13 00:40:20.843041 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:40:20.843367 systemd[1]: session-7.scope: Consumed 8.213s CPU time, 274.3M memory peak. 
Mar 13 00:40:20.846615 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:40:20.849175 systemd-logind[1519]: Removed session 7. Mar 13 00:40:22.101255 kubelet[2820]: I0313 00:40:22.101209 2820 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:40:22.103125 containerd[1541]: time="2026-03-13T00:40:22.102149123Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:40:22.103570 kubelet[2820]: I0313 00:40:22.102644 2820 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:40:37.925277 systemd[1]: Started sshd@11-10.128.0.75:22-112.196.70.142:41016.service - OpenSSH per-connection server daemon (112.196.70.142:41016). Mar 13 00:40:39.456956 sshd[2894]: Invalid user sandro from 112.196.70.142 port 41016 Mar 13 00:40:39.759622 sshd[2894]: Received disconnect from 112.196.70.142 port 41016:11: Bye Bye [preauth] Mar 13 00:40:39.759622 sshd[2894]: Disconnected from invalid user sandro 112.196.70.142 port 41016 [preauth] Mar 13 00:40:39.764325 systemd[1]: sshd@11-10.128.0.75:22-112.196.70.142:41016.service: Deactivated successfully. Mar 13 00:41:04.994019 systemd[1]: Created slice kubepods-besteffort-pod678ebb65_664c_4a03_8966_dd30e7ca0b08.slice - libcontainer container kubepods-besteffort-pod678ebb65_664c_4a03_8966_dd30e7ca0b08.slice. 
Mar 13 00:41:05.000136 kubelet[2820]: E0313 00:41:05.000068 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Mar 13 00:41:05.001320 kubelet[2820]: E0313 00:41:05.001058 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Mar 13 00:41:05.018233 systemd[1]: Created slice kubepods-besteffort-poddcdf42cb_e867_49f0_9c1a_cbdd31027443.slice - libcontainer container kubepods-besteffort-poddcdf42cb_e867_49f0_9c1a_cbdd31027443.slice. Mar 13 00:41:05.041192 systemd[1]: Created slice kubepods-burstable-pod04c74789_40bc_417d_adcb_3fe88f9ed86a.slice - libcontainer container kubepods-burstable-pod04c74789_40bc_417d_adcb_3fe88f9ed86a.slice. 
Mar 13 00:41:05.044759 kubelet[2820]: I0313 00:41:05.044640 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xht5\" (UniqueName: \"kubernetes.io/projected/dcdf42cb-e867-49f0-9c1a-cbdd31027443-kube-api-access-8xht5\") pod \"cilium-operator-6f9c7c5859-ldfr6\" (UID: \"dcdf42cb-e867-49f0-9c1a-cbdd31027443\") " pod="kube-system/cilium-operator-6f9c7c5859-ldfr6" Mar 13 00:41:05.044938 kubelet[2820]: I0313 00:41:05.044766 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-xtables-lock\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.044938 kubelet[2820]: I0313 00:41:05.044798 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcdf42cb-e867-49f0-9c1a-cbdd31027443-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-ldfr6\" (UID: \"dcdf42cb-e867-49f0-9c1a-cbdd31027443\") " pod="kube-system/cilium-operator-6f9c7c5859-ldfr6" Mar 13 00:41:05.044938 kubelet[2820]: I0313 00:41:05.044825 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-kernel\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.044938 kubelet[2820]: I0313 00:41:05.044848 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/678ebb65-664c-4a03-8966-dd30e7ca0b08-kube-proxy\") pod \"kube-proxy-6p9zh\" (UID: \"678ebb65-664c-4a03-8966-dd30e7ca0b08\") " pod="kube-system/kube-proxy-6p9zh" Mar 13 
00:41:05.044938 kubelet[2820]: I0313 00:41:05.044872 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04c74789-40bc-417d-adcb-3fe88f9ed86a-clustermesh-secrets\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045225 kubelet[2820]: I0313 00:41:05.044901 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sqh\" (UniqueName: \"kubernetes.io/projected/678ebb65-664c-4a03-8966-dd30e7ca0b08-kube-api-access-r6sqh\") pod \"kube-proxy-6p9zh\" (UID: \"678ebb65-664c-4a03-8966-dd30e7ca0b08\") " pod="kube-system/kube-proxy-6p9zh" Mar 13 00:41:05.045225 kubelet[2820]: I0313 00:41:05.044924 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-run\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045225 kubelet[2820]: I0313 00:41:05.044952 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-hostproc\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045225 kubelet[2820]: I0313 00:41:05.044978 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-cgroup\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045225 kubelet[2820]: I0313 00:41:05.045002 2820 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cni-path\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045225 kubelet[2820]: I0313 00:41:05.045029 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-etc-cni-netd\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045517 kubelet[2820]: I0313 00:41:05.045054 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-lib-modules\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045517 kubelet[2820]: I0313 00:41:05.045082 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-config-path\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045517 kubelet[2820]: I0313 00:41:05.045109 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-net\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045517 kubelet[2820]: I0313 00:41:05.045140 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h62q2\" (UniqueName: 
\"kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-kube-api-access-h62q2\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045517 kubelet[2820]: I0313 00:41:05.045166 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-bpf-maps\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045517 kubelet[2820]: I0313 00:41:05.045195 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/678ebb65-664c-4a03-8966-dd30e7ca0b08-lib-modules\") pod \"kube-proxy-6p9zh\" (UID: \"678ebb65-664c-4a03-8966-dd30e7ca0b08\") " pod="kube-system/kube-proxy-6p9zh" Mar 13 00:41:05.045823 kubelet[2820]: I0313 00:41:05.045227 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-hubble-tls\") pod \"cilium-k9sbm\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " pod="kube-system/cilium-k9sbm" Mar 13 00:41:05.045823 kubelet[2820]: I0313 00:41:05.045254 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/678ebb65-664c-4a03-8966-dd30e7ca0b08-xtables-lock\") pod \"kube-proxy-6p9zh\" (UID: \"678ebb65-664c-4a03-8966-dd30e7ca0b08\") " pod="kube-system/kube-proxy-6p9zh" Mar 13 00:41:05.314660 containerd[1541]: time="2026-03-13T00:41:05.314335265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6p9zh,Uid:678ebb65-664c-4a03-8966-dd30e7ca0b08,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:05.332649 containerd[1541]: time="2026-03-13T00:41:05.332578270Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ldfr6,Uid:dcdf42cb-e867-49f0-9c1a-cbdd31027443,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:05.372716 containerd[1541]: time="2026-03-13T00:41:05.372360399Z" level=info msg="connecting to shim c283fffe2ecc6a1cbf2bcb635997edeb9b60ebee4436fb31c3973f6aa8a56900" address="unix:///run/containerd/s/e33e5cc872a32ba563fca0535340bd128ceadf3f8d33f66424d0d20a6624abe4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:05.389069 containerd[1541]: time="2026-03-13T00:41:05.389002393Z" level=info msg="connecting to shim 2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168" address="unix:///run/containerd/s/3b31228290e2b7e511f33a66fffef5835889760ad65e23be1ae2d7b33b153a93" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:05.431095 systemd[1]: Started cri-containerd-c283fffe2ecc6a1cbf2bcb635997edeb9b60ebee4436fb31c3973f6aa8a56900.scope - libcontainer container c283fffe2ecc6a1cbf2bcb635997edeb9b60ebee4436fb31c3973f6aa8a56900. Mar 13 00:41:05.449058 systemd[1]: Started cri-containerd-2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168.scope - libcontainer container 2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168. 
Mar 13 00:41:05.504964 containerd[1541]: time="2026-03-13T00:41:05.504861191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6p9zh,Uid:678ebb65-664c-4a03-8966-dd30e7ca0b08,Namespace:kube-system,Attempt:0,} returns sandbox id \"c283fffe2ecc6a1cbf2bcb635997edeb9b60ebee4436fb31c3973f6aa8a56900\"" Mar 13 00:41:05.517356 containerd[1541]: time="2026-03-13T00:41:05.517291271Z" level=info msg="CreateContainer within sandbox \"c283fffe2ecc6a1cbf2bcb635997edeb9b60ebee4436fb31c3973f6aa8a56900\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:41:05.531622 containerd[1541]: time="2026-03-13T00:41:05.531540142Z" level=info msg="Container 62e5d2ddc7b96c81294095330bd229abdfad4080291b517772e46f8d691fc2b6: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:05.545220 containerd[1541]: time="2026-03-13T00:41:05.545161613Z" level=info msg="CreateContainer within sandbox \"c283fffe2ecc6a1cbf2bcb635997edeb9b60ebee4436fb31c3973f6aa8a56900\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62e5d2ddc7b96c81294095330bd229abdfad4080291b517772e46f8d691fc2b6\"" Mar 13 00:41:05.546581 containerd[1541]: time="2026-03-13T00:41:05.546521544Z" level=info msg="StartContainer for \"62e5d2ddc7b96c81294095330bd229abdfad4080291b517772e46f8d691fc2b6\"" Mar 13 00:41:05.549864 containerd[1541]: time="2026-03-13T00:41:05.549784608Z" level=info msg="connecting to shim 62e5d2ddc7b96c81294095330bd229abdfad4080291b517772e46f8d691fc2b6" address="unix:///run/containerd/s/e33e5cc872a32ba563fca0535340bd128ceadf3f8d33f66424d0d20a6624abe4" protocol=ttrpc version=3 Mar 13 00:41:05.570176 containerd[1541]: time="2026-03-13T00:41:05.569568060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ldfr6,Uid:dcdf42cb-e867-49f0-9c1a-cbdd31027443,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\"" Mar 13 00:41:05.579438 containerd[1541]: 
time="2026-03-13T00:41:05.579381312Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 00:41:05.594649 systemd[1]: Started cri-containerd-62e5d2ddc7b96c81294095330bd229abdfad4080291b517772e46f8d691fc2b6.scope - libcontainer container 62e5d2ddc7b96c81294095330bd229abdfad4080291b517772e46f8d691fc2b6. Mar 13 00:41:05.704312 containerd[1541]: time="2026-03-13T00:41:05.704251181Z" level=info msg="StartContainer for \"62e5d2ddc7b96c81294095330bd229abdfad4080291b517772e46f8d691fc2b6\" returns successfully" Mar 13 00:41:06.141013 kubelet[2820]: I0313 00:41:06.139936 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6p9zh" podStartSLOduration=44.139909565 podStartE2EDuration="44.139909565s" podCreationTimestamp="2026-03-13 00:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:06.138129516 +0000 UTC m=+48.555429226" watchObservedRunningTime="2026-03-13 00:41:06.139909565 +0000 UTC m=+48.557209273" Mar 13 00:41:06.147764 kubelet[2820]: E0313 00:41:06.147470 2820 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 13 00:41:06.147764 kubelet[2820]: E0313 00:41:06.148070 2820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04c74789-40bc-417d-adcb-3fe88f9ed86a-clustermesh-secrets podName:04c74789-40bc-417d-adcb-3fe88f9ed86a nodeName:}" failed. No retries permitted until 2026-03-13 00:41:06.648023037 +0000 UTC m=+49.065322739 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/04c74789-40bc-417d-adcb-3fe88f9ed86a-clustermesh-secrets") pod "cilium-k9sbm" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a") : failed to sync secret cache: timed out waiting for the condition Mar 13 00:41:06.149974 kubelet[2820]: E0313 00:41:06.149767 2820 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 13 00:41:06.150263 kubelet[2820]: E0313 00:41:06.150238 2820 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-k9sbm: failed to sync secret cache: timed out waiting for the condition Mar 13 00:41:06.151022 kubelet[2820]: E0313 00:41:06.150791 2820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-hubble-tls podName:04c74789-40bc-417d-adcb-3fe88f9ed86a nodeName:}" failed. No retries permitted until 2026-03-13 00:41:06.650758528 +0000 UTC m=+49.068058235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-hubble-tls") pod "cilium-k9sbm" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a") : failed to sync secret cache: timed out waiting for the condition Mar 13 00:41:06.536065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060961236.mount: Deactivated successfully. 
Mar 13 00:41:06.865488 containerd[1541]: time="2026-03-13T00:41:06.865423170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9sbm,Uid:04c74789-40bc-417d-adcb-3fe88f9ed86a,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:06.912203 containerd[1541]: time="2026-03-13T00:41:06.912123642Z" level=info msg="connecting to shim c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3" address="unix:///run/containerd/s/d6c63487a3fd764e40744b27c9852aa95af95c72832ee0c14a6350758845babd" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:06.986556 systemd[1]: Started cri-containerd-c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3.scope - libcontainer container c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3. Mar 13 00:41:07.103658 containerd[1541]: time="2026-03-13T00:41:07.103569478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9sbm,Uid:04c74789-40bc-417d-adcb-3fe88f9ed86a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\"" Mar 13 00:41:07.767232 containerd[1541]: time="2026-03-13T00:41:07.766991697Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:07.768928 containerd[1541]: time="2026-03-13T00:41:07.768852976Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 13 00:41:07.770768 containerd[1541]: time="2026-03-13T00:41:07.770636495Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:07.772778 containerd[1541]: time="2026-03-13T00:41:07.772517460Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.193068277s" Mar 13 00:41:07.772778 containerd[1541]: time="2026-03-13T00:41:07.772566644Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 13 00:41:07.774190 containerd[1541]: time="2026-03-13T00:41:07.774149186Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 00:41:07.780633 containerd[1541]: time="2026-03-13T00:41:07.780556758Z" level=info msg="CreateContainer within sandbox \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 00:41:07.800246 containerd[1541]: time="2026-03-13T00:41:07.800185824Z" level=info msg="Container 5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:07.811577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514395800.mount: Deactivated successfully. 
Mar 13 00:41:07.816199 containerd[1541]: time="2026-03-13T00:41:07.816116025Z" level=info msg="CreateContainer within sandbox \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\"" Mar 13 00:41:07.817288 containerd[1541]: time="2026-03-13T00:41:07.817214817Z" level=info msg="StartContainer for \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\"" Mar 13 00:41:07.819603 containerd[1541]: time="2026-03-13T00:41:07.818918297Z" level=info msg="connecting to shim 5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453" address="unix:///run/containerd/s/3b31228290e2b7e511f33a66fffef5835889760ad65e23be1ae2d7b33b153a93" protocol=ttrpc version=3 Mar 13 00:41:07.861258 systemd[1]: Started cri-containerd-5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453.scope - libcontainer container 5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453. 
Mar 13 00:41:07.939421 containerd[1541]: time="2026-03-13T00:41:07.939358939Z" level=info msg="StartContainer for \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" returns successfully" Mar 13 00:41:08.178594 kubelet[2820]: I0313 00:41:08.178502 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-ldfr6" podStartSLOduration=42.977609923 podStartE2EDuration="45.178477338s" podCreationTimestamp="2026-03-13 00:40:23 +0000 UTC" firstStartedPulling="2026-03-13 00:41:05.573129929 +0000 UTC m=+47.990429633" lastFinishedPulling="2026-03-13 00:41:07.773997345 +0000 UTC m=+50.191297048" observedRunningTime="2026-03-13 00:41:08.17418721 +0000 UTC m=+50.591486919" watchObservedRunningTime="2026-03-13 00:41:08.178477338 +0000 UTC m=+50.595777047" Mar 13 00:41:08.868584 systemd[1]: Started sshd@12-10.128.0.75:22-20.161.92.111:43390.service - OpenSSH per-connection server daemon (20.161.92.111:43390). Mar 13 00:41:09.227558 sshd[3256]: Accepted publickey for core from 20.161.92.111 port 43390 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:09.229299 sshd-session[3256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:09.245245 systemd-logind[1519]: New session 8 of user core. Mar 13 00:41:09.252992 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:41:09.892788 sshd[3259]: Connection closed by 20.161.92.111 port 43390 Mar 13 00:41:09.893903 sshd-session[3256]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:09.906713 systemd[1]: sshd@12-10.128.0.75:22-20.161.92.111:43390.service: Deactivated successfully. Mar 13 00:41:09.912939 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:41:09.925499 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:41:09.935411 systemd-logind[1519]: Removed session 8. 
Mar 13 00:41:10.066189 systemd[1]: Started sshd@13-10.128.0.75:22-160.187.240.90:54674.service - OpenSSH per-connection server daemon (160.187.240.90:54674). Mar 13 00:41:11.238765 sshd[3273]: Invalid user ais from 160.187.240.90 port 54674 Mar 13 00:41:11.458019 sshd[3273]: Received disconnect from 160.187.240.90 port 54674:11: Bye Bye [preauth] Mar 13 00:41:11.458371 sshd[3273]: Disconnected from invalid user ais 160.187.240.90 port 54674 [preauth] Mar 13 00:41:11.462616 systemd[1]: sshd@13-10.128.0.75:22-160.187.240.90:54674.service: Deactivated successfully. Mar 13 00:41:14.945270 systemd[1]: Started sshd@14-10.128.0.75:22-20.161.92.111:36068.service - OpenSSH per-connection server daemon (20.161.92.111:36068). Mar 13 00:41:15.219246 sshd[3284]: Accepted publickey for core from 20.161.92.111 port 36068 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:15.222639 sshd-session[3284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:15.236243 systemd-logind[1519]: New session 9 of user core. Mar 13 00:41:15.242403 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:41:15.500616 sshd[3287]: Connection closed by 20.161.92.111 port 36068 Mar 13 00:41:15.502319 sshd-session[3284]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:15.516893 systemd[1]: sshd@14-10.128.0.75:22-20.161.92.111:36068.service: Deactivated successfully. Mar 13 00:41:15.522181 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:41:15.524883 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:41:15.528082 systemd-logind[1519]: Removed session 9. Mar 13 00:41:15.551290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682017533.mount: Deactivated successfully. 
Mar 13 00:41:18.975894 containerd[1541]: time="2026-03-13T00:41:18.975814179Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:18.977859 containerd[1541]: time="2026-03-13T00:41:18.977758272Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 13 00:41:18.980277 containerd[1541]: time="2026-03-13T00:41:18.980114592Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:18.983284 containerd[1541]: time="2026-03-13T00:41:18.982704463Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.208501493s" Mar 13 00:41:18.983284 containerd[1541]: time="2026-03-13T00:41:18.982881288Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 13 00:41:18.992148 containerd[1541]: time="2026-03-13T00:41:18.992057328Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:41:19.009948 containerd[1541]: time="2026-03-13T00:41:19.007894843Z" level=info msg="Container e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809: CDI 
devices from CRI Config.CDIDevices: []" Mar 13 00:41:19.022120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682649488.mount: Deactivated successfully. Mar 13 00:41:19.034507 containerd[1541]: time="2026-03-13T00:41:19.034450450Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\"" Mar 13 00:41:19.041267 containerd[1541]: time="2026-03-13T00:41:19.041203192Z" level=info msg="StartContainer for \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\"" Mar 13 00:41:19.048388 containerd[1541]: time="2026-03-13T00:41:19.048271409Z" level=info msg="connecting to shim e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809" address="unix:///run/containerd/s/d6c63487a3fd764e40744b27c9852aa95af95c72832ee0c14a6350758845babd" protocol=ttrpc version=3 Mar 13 00:41:19.084212 systemd[1]: Started cri-containerd-e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809.scope - libcontainer container e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809. Mar 13 00:41:19.148633 containerd[1541]: time="2026-03-13T00:41:19.148556600Z" level=info msg="StartContainer for \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\" returns successfully" Mar 13 00:41:19.169166 systemd[1]: cri-containerd-e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809.scope: Deactivated successfully. 
Mar 13 00:41:19.176098 containerd[1541]: time="2026-03-13T00:41:19.176018124Z" level=info msg="received container exit event container_id:\"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\" id:\"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\" pid:3339 exited_at:{seconds:1773362479 nanos:175179590}" Mar 13 00:41:19.223357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809-rootfs.mount: Deactivated successfully. Mar 13 00:41:19.975192 systemd[1]: Started sshd@15-10.128.0.75:22-116.203.152.173:56040.service - OpenSSH per-connection server daemon (116.203.152.173:56040). Mar 13 00:41:20.544543 systemd[1]: Started sshd@16-10.128.0.75:22-20.161.92.111:51648.service - OpenSSH per-connection server daemon (20.161.92.111:51648). Mar 13 00:41:20.686794 sshd[3373]: Invalid user frontend from 116.203.152.173 port 56040 Mar 13 00:41:20.782941 sshd[3379]: Accepted publickey for core from 20.161.92.111 port 51648 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:20.784688 sshd-session[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:20.793242 systemd-logind[1519]: New session 10 of user core. Mar 13 00:41:20.807121 sshd[3373]: Received disconnect from 116.203.152.173 port 56040:11: Bye Bye [preauth] Mar 13 00:41:20.807121 sshd[3373]: Disconnected from invalid user frontend 116.203.152.173 port 56040 [preauth] Mar 13 00:41:20.809142 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:41:20.814360 systemd[1]: sshd@15-10.128.0.75:22-116.203.152.173:56040.service: Deactivated successfully. Mar 13 00:41:21.050443 sshd[3384]: Connection closed by 20.161.92.111 port 51648 Mar 13 00:41:21.051614 sshd-session[3379]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:21.061848 systemd[1]: sshd@16-10.128.0.75:22-20.161.92.111:51648.service: Deactivated successfully. 
Mar 13 00:41:21.065898 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:41:21.070384 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:41:21.075350 systemd-logind[1519]: Removed session 10. Mar 13 00:41:22.276773 containerd[1541]: time="2026-03-13T00:41:22.275654385Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:41:22.298774 containerd[1541]: time="2026-03-13T00:41:22.297980387Z" level=info msg="Container 9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:22.319032 containerd[1541]: time="2026-03-13T00:41:22.318970604Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\"" Mar 13 00:41:22.320940 containerd[1541]: time="2026-03-13T00:41:22.320153595Z" level=info msg="StartContainer for \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\"" Mar 13 00:41:22.324360 containerd[1541]: time="2026-03-13T00:41:22.324285334Z" level=info msg="connecting to shim 9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678" address="unix:///run/containerd/s/d6c63487a3fd764e40744b27c9852aa95af95c72832ee0c14a6350758845babd" protocol=ttrpc version=3 Mar 13 00:41:22.373149 systemd[1]: Started cri-containerd-9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678.scope - libcontainer container 9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678. 
Mar 13 00:41:22.439105 containerd[1541]: time="2026-03-13T00:41:22.439015436Z" level=info msg="StartContainer for \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\" returns successfully" Mar 13 00:41:22.473035 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:41:22.473462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:41:22.473901 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:41:22.480127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:41:22.485231 containerd[1541]: time="2026-03-13T00:41:22.485008728Z" level=info msg="received container exit event container_id:\"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\" id:\"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\" pid:3410 exited_at:{seconds:1773362482 nanos:484401906}" Mar 13 00:41:22.488598 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:41:22.489472 systemd[1]: cri-containerd-9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678.scope: Deactivated successfully. Mar 13 00:41:22.538170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:41:23.282532 containerd[1541]: time="2026-03-13T00:41:23.282473804Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:41:23.300848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678-rootfs.mount: Deactivated successfully. 
Mar 13 00:41:23.314651 containerd[1541]: time="2026-03-13T00:41:23.314563466Z" level=info msg="Container 8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:23.328538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936343733.mount: Deactivated successfully. Mar 13 00:41:23.338682 containerd[1541]: time="2026-03-13T00:41:23.338600677Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\"" Mar 13 00:41:23.340339 containerd[1541]: time="2026-03-13T00:41:23.339339179Z" level=info msg="StartContainer for \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\"" Mar 13 00:41:23.342620 containerd[1541]: time="2026-03-13T00:41:23.342463559Z" level=info msg="connecting to shim 8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a" address="unix:///run/containerd/s/d6c63487a3fd764e40744b27c9852aa95af95c72832ee0c14a6350758845babd" protocol=ttrpc version=3 Mar 13 00:41:23.383053 systemd[1]: Started cri-containerd-8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a.scope - libcontainer container 8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a. Mar 13 00:41:23.494780 containerd[1541]: time="2026-03-13T00:41:23.494697157Z" level=info msg="StartContainer for \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\" returns successfully" Mar 13 00:41:23.498612 systemd[1]: cri-containerd-8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a.scope: Deactivated successfully. 
Mar 13 00:41:23.503794 containerd[1541]: time="2026-03-13T00:41:23.503672050Z" level=info msg="received container exit event container_id:\"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\" id:\"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\" pid:3457 exited_at:{seconds:1773362483 nanos:502623945}" Mar 13 00:41:23.550282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a-rootfs.mount: Deactivated successfully. Mar 13 00:41:24.291149 containerd[1541]: time="2026-03-13T00:41:24.291072148Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 00:41:24.308523 containerd[1541]: time="2026-03-13T00:41:24.307244502Z" level=info msg="Container 0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:24.331022 containerd[1541]: time="2026-03-13T00:41:24.330948912Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\"" Mar 13 00:41:24.333103 containerd[1541]: time="2026-03-13T00:41:24.333067802Z" level=info msg="StartContainer for \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\"" Mar 13 00:41:24.335222 containerd[1541]: time="2026-03-13T00:41:24.335164566Z" level=info msg="connecting to shim 0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4" address="unix:///run/containerd/s/d6c63487a3fd764e40744b27c9852aa95af95c72832ee0c14a6350758845babd" protocol=ttrpc version=3 Mar 13 00:41:24.374048 systemd[1]: Started cri-containerd-0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4.scope - libcontainer container 
0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4. Mar 13 00:41:24.429618 systemd[1]: cri-containerd-0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4.scope: Deactivated successfully. Mar 13 00:41:24.433176 containerd[1541]: time="2026-03-13T00:41:24.432853154Z" level=info msg="received container exit event container_id:\"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\" id:\"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\" pid:3497 exited_at:{seconds:1773362484 nanos:432084944}" Mar 13 00:41:24.455672 containerd[1541]: time="2026-03-13T00:41:24.455619642Z" level=info msg="StartContainer for \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\" returns successfully" Mar 13 00:41:24.484032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4-rootfs.mount: Deactivated successfully. Mar 13 00:41:25.015720 systemd[1]: sshd@7-10.128.0.75:22-218.17.21.99:37408.service: Deactivated successfully. 
Mar 13 00:41:25.300354 containerd[1541]: time="2026-03-13T00:41:25.300190529Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:41:25.325423 containerd[1541]: time="2026-03-13T00:41:25.324864116Z" level=info msg="Container f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:25.341023 containerd[1541]: time="2026-03-13T00:41:25.340961955Z" level=info msg="CreateContainer within sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\"" Mar 13 00:41:25.342465 containerd[1541]: time="2026-03-13T00:41:25.342386296Z" level=info msg="StartContainer for \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\"" Mar 13 00:41:25.345465 containerd[1541]: time="2026-03-13T00:41:25.345333445Z" level=info msg="connecting to shim f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0" address="unix:///run/containerd/s/d6c63487a3fd764e40744b27c9852aa95af95c72832ee0c14a6350758845babd" protocol=ttrpc version=3 Mar 13 00:41:25.382070 systemd[1]: Started cri-containerd-f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0.scope - libcontainer container f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0. 
Mar 13 00:41:25.448573 containerd[1541]: time="2026-03-13T00:41:25.448433271Z" level=info msg="StartContainer for \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" returns successfully" Mar 13 00:41:25.614496 kubelet[2820]: I0313 00:41:25.614245 2820 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:41:25.681875 systemd[1]: Created slice kubepods-burstable-podbbc4f76d_51f4_49bd_868e_f3c8f9362191.slice - libcontainer container kubepods-burstable-podbbc4f76d_51f4_49bd_868e_f3c8f9362191.slice. Mar 13 00:41:25.698719 systemd[1]: Created slice kubepods-burstable-pod78b77736_c4e5_41ca_a720_3a4a3ddae154.slice - libcontainer container kubepods-burstable-pod78b77736_c4e5_41ca_a720_3a4a3ddae154.slice. Mar 13 00:41:25.724756 kubelet[2820]: I0313 00:41:25.724623 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fjbg\" (UniqueName: \"kubernetes.io/projected/78b77736-c4e5-41ca-a720-3a4a3ddae154-kube-api-access-5fjbg\") pod \"coredns-66bc5c9577-r9rrr\" (UID: \"78b77736-c4e5-41ca-a720-3a4a3ddae154\") " pod="kube-system/coredns-66bc5c9577-r9rrr" Mar 13 00:41:25.724756 kubelet[2820]: I0313 00:41:25.724683 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78b77736-c4e5-41ca-a720-3a4a3ddae154-config-volume\") pod \"coredns-66bc5c9577-r9rrr\" (UID: \"78b77736-c4e5-41ca-a720-3a4a3ddae154\") " pod="kube-system/coredns-66bc5c9577-r9rrr" Mar 13 00:41:25.726117 kubelet[2820]: I0313 00:41:25.724723 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbc4f76d-51f4-49bd-868e-f3c8f9362191-config-volume\") pod \"coredns-66bc5c9577-rhmv4\" (UID: \"bbc4f76d-51f4-49bd-868e-f3c8f9362191\") " pod="kube-system/coredns-66bc5c9577-rhmv4" Mar 13 00:41:25.726117 
kubelet[2820]: I0313 00:41:25.726020 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bs86\" (UniqueName: \"kubernetes.io/projected/bbc4f76d-51f4-49bd-868e-f3c8f9362191-kube-api-access-8bs86\") pod \"coredns-66bc5c9577-rhmv4\" (UID: \"bbc4f76d-51f4-49bd-868e-f3c8f9362191\") " pod="kube-system/coredns-66bc5c9577-rhmv4" Mar 13 00:41:25.997781 containerd[1541]: time="2026-03-13T00:41:25.997585136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rhmv4,Uid:bbc4f76d-51f4-49bd-868e-f3c8f9362191,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:26.011481 containerd[1541]: time="2026-03-13T00:41:26.011387780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r9rrr,Uid:78b77736-c4e5-41ca-a720-3a4a3ddae154,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:26.112549 systemd[1]: Started sshd@17-10.128.0.75:22-20.161.92.111:51656.service - OpenSSH per-connection server daemon (20.161.92.111:51656). Mar 13 00:41:26.359509 kubelet[2820]: I0313 00:41:26.359324 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k9sbm" podStartSLOduration=52.482403381 podStartE2EDuration="1m4.359128511s" podCreationTimestamp="2026-03-13 00:40:22 +0000 UTC" firstStartedPulling="2026-03-13 00:41:07.107310979 +0000 UTC m=+49.524610675" lastFinishedPulling="2026-03-13 00:41:18.984036116 +0000 UTC m=+61.401335805" observedRunningTime="2026-03-13 00:41:26.358382993 +0000 UTC m=+68.775682726" watchObservedRunningTime="2026-03-13 00:41:26.359128511 +0000 UTC m=+68.776428220" Mar 13 00:41:26.396598 sshd[3627]: Accepted publickey for core from 20.161.92.111 port 51656 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:26.399177 sshd-session[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:26.407483 systemd-logind[1519]: New session 11 of user core. 
Mar 13 00:41:26.417179 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:41:26.607401 sshd[3662]: Connection closed by 20.161.92.111 port 51656 Mar 13 00:41:26.608416 sshd-session[3627]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:26.615358 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:41:26.616161 systemd[1]: sshd@17-10.128.0.75:22-20.161.92.111:51656.service: Deactivated successfully. Mar 13 00:41:26.620719 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:41:26.624660 systemd-logind[1519]: Removed session 11. Mar 13 00:41:28.094724 systemd-networkd[1425]: cilium_host: Link UP Mar 13 00:41:28.096102 systemd-networkd[1425]: cilium_net: Link UP Mar 13 00:41:28.096887 systemd-networkd[1425]: cilium_net: Gained carrier Mar 13 00:41:28.097466 systemd-networkd[1425]: cilium_host: Gained carrier Mar 13 00:41:28.272445 systemd-networkd[1425]: cilium_vxlan: Link UP Mar 13 00:41:28.272458 systemd-networkd[1425]: cilium_vxlan: Gained carrier Mar 13 00:41:28.580863 kernel: NET: Registered PF_ALG protocol family Mar 13 00:41:28.956918 systemd-networkd[1425]: cilium_net: Gained IPv6LL Mar 13 00:41:29.022255 systemd-networkd[1425]: cilium_host: Gained IPv6LL Mar 13 00:41:29.578427 systemd-networkd[1425]: lxc_health: Link UP Mar 13 00:41:29.584259 systemd-networkd[1425]: lxc_health: Gained carrier Mar 13 00:41:30.095515 systemd-networkd[1425]: lxcf25323c3e108: Link UP Mar 13 00:41:30.106979 kernel: eth0: renamed from tmpa4501 Mar 13 00:41:30.133235 kernel: eth0: renamed from tmp8eefa Mar 13 00:41:30.127925 systemd-networkd[1425]: lxcf25323c3e108: Gained carrier Mar 13 00:41:30.128241 systemd-networkd[1425]: lxc9373729a5fdc: Link UP Mar 13 00:41:30.140457 systemd-networkd[1425]: lxc9373729a5fdc: Gained carrier Mar 13 00:41:30.173387 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL Mar 13 00:41:30.877067 systemd-networkd[1425]: lxc_health: Gained IPv6LL Mar 13 00:41:31.661029 
systemd[1]: Started sshd@18-10.128.0.75:22-20.161.92.111:52368.service - OpenSSH per-connection server daemon (20.161.92.111:52368). Mar 13 00:41:31.772981 systemd-networkd[1425]: lxcf25323c3e108: Gained IPv6LL Mar 13 00:41:31.915773 sshd[4039]: Accepted publickey for core from 20.161.92.111 port 52368 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:31.919409 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:31.936096 systemd-logind[1519]: New session 12 of user core. Mar 13 00:41:31.943135 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:41:32.092951 systemd-networkd[1425]: lxc9373729a5fdc: Gained IPv6LL Mar 13 00:41:32.178818 sshd[4042]: Connection closed by 20.161.92.111 port 52368 Mar 13 00:41:32.179754 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:32.192279 systemd[1]: sshd@18-10.128.0.75:22-20.161.92.111:52368.service: Deactivated successfully. Mar 13 00:41:32.198786 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:41:32.201788 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:41:32.205513 systemd-logind[1519]: Removed session 12. Mar 13 00:41:32.232159 systemd[1]: Started sshd@19-10.128.0.75:22-20.161.92.111:52376.service - OpenSSH per-connection server daemon (20.161.92.111:52376). Mar 13 00:41:32.497403 sshd[4055]: Accepted publickey for core from 20.161.92.111 port 52376 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:32.500432 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:32.512537 systemd-logind[1519]: New session 13 of user core. Mar 13 00:41:32.518601 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 13 00:41:32.823382 sshd[4060]: Connection closed by 20.161.92.111 port 52376 Mar 13 00:41:32.825117 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:32.839469 systemd[1]: sshd@19-10.128.0.75:22-20.161.92.111:52376.service: Deactivated successfully. Mar 13 00:41:32.848318 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:41:32.850585 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:41:32.871155 systemd-logind[1519]: Removed session 13. Mar 13 00:41:32.874093 systemd[1]: Started sshd@20-10.128.0.75:22-20.161.92.111:52386.service - OpenSSH per-connection server daemon (20.161.92.111:52386). Mar 13 00:41:33.167148 sshd[4069]: Accepted publickey for core from 20.161.92.111 port 52386 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:33.169516 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:33.184083 systemd-logind[1519]: New session 14 of user core. Mar 13 00:41:33.190045 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:41:33.434057 sshd[4072]: Connection closed by 20.161.92.111 port 52386 Mar 13 00:41:33.436964 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:33.446305 systemd[1]: sshd@20-10.128.0.75:22-20.161.92.111:52386.service: Deactivated successfully. Mar 13 00:41:33.446634 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:41:33.453168 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:41:33.459128 systemd-logind[1519]: Removed session 14. Mar 13 00:41:33.829156 systemd[1]: Started sshd@21-10.128.0.75:22-183.94.33.245:51070.service - OpenSSH per-connection server daemon (183.94.33.245:51070). 
Mar 13 00:41:34.132669 ntpd[1599]: Listen normally on 6 cilium_host 192.168.0.245:123 Mar 13 00:41:34.133440 ntpd[1599]: Listen normally on 7 cilium_net [fe80::ecb6:82ff:fe91:44cd%4]:123 Mar 13 00:41:34.133490 ntpd[1599]: Listen normally on 8 cilium_host [fe80::487c:33ff:fea3:fa6a%5]:123 Mar 13 00:41:34.133532 ntpd[1599]: Listen normally on 9 cilium_vxlan [fe80::f836:5dff:fe84:844b%6]:123 Mar 13 00:41:34.133576 ntpd[1599]: Listen normally on 10 lxc_health [fe80::7cc6:c3ff:fe36:e8b7%8]:123 Mar 13 00:41:34.133635 ntpd[1599]: Listen normally on 11 lxcf25323c3e108 [fe80::942d:ccff:fec1:af33%10]:123 Mar 13 00:41:34.133678 ntpd[1599]: Listen normally on 12 lxc9373729a5fdc [fe80::745b:35ff:feab:d4bd%12]:123 Mar 13 00:41:36.353778 containerd[1541]: time="2026-03-13T00:41:36.353202625Z" level=info msg="connecting to shim a4501658c56d2a124edbf966caac3add235d5c54f13262262c5589cc503bac51" address="unix:///run/containerd/s/469ced315a934f1f609e6927d304e6ee3c89bd4e5092e5aaf60ba61f32d51bb1" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:36.390772 containerd[1541]:
time="2026-03-13T00:41:36.389092545Z" level=info msg="connecting to shim 8eefa839630ed9baadb0d5401a2e38ad57d6d4c5088bbbb55a1a781884143fcb" address="unix:///run/containerd/s/cfcfd9b099d30ca532fa1dff7a1467f5ab08c8b0b9a7ce84d84cf937723645bd" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:36.417180 systemd[1]: Started cri-containerd-a4501658c56d2a124edbf966caac3add235d5c54f13262262c5589cc503bac51.scope - libcontainer container a4501658c56d2a124edbf966caac3add235d5c54f13262262c5589cc503bac51. Mar 13 00:41:36.480182 systemd[1]: Started cri-containerd-8eefa839630ed9baadb0d5401a2e38ad57d6d4c5088bbbb55a1a781884143fcb.scope - libcontainer container 8eefa839630ed9baadb0d5401a2e38ad57d6d4c5088bbbb55a1a781884143fcb. Mar 13 00:41:36.593000 containerd[1541]: time="2026-03-13T00:41:36.592928320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-r9rrr,Uid:78b77736-c4e5-41ca-a720-3a4a3ddae154,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4501658c56d2a124edbf966caac3add235d5c54f13262262c5589cc503bac51\"" Mar 13 00:41:36.609283 containerd[1541]: time="2026-03-13T00:41:36.608868908Z" level=info msg="CreateContainer within sandbox \"a4501658c56d2a124edbf966caac3add235d5c54f13262262c5589cc503bac51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:41:36.632830 containerd[1541]: time="2026-03-13T00:41:36.631170938Z" level=info msg="Container a6229729e9604472c940b014a17db9629deac0ae2d6de347080d9dc1d0bc5c4c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:36.648003 containerd[1541]: time="2026-03-13T00:41:36.647938697Z" level=info msg="CreateContainer within sandbox \"a4501658c56d2a124edbf966caac3add235d5c54f13262262c5589cc503bac51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6229729e9604472c940b014a17db9629deac0ae2d6de347080d9dc1d0bc5c4c\"" Mar 13 00:41:36.649624 containerd[1541]: time="2026-03-13T00:41:36.648957923Z" level=info msg="StartContainer for 
\"a6229729e9604472c940b014a17db9629deac0ae2d6de347080d9dc1d0bc5c4c\"" Mar 13 00:41:36.651667 containerd[1541]: time="2026-03-13T00:41:36.651581204Z" level=info msg="connecting to shim a6229729e9604472c940b014a17db9629deac0ae2d6de347080d9dc1d0bc5c4c" address="unix:///run/containerd/s/469ced315a934f1f609e6927d304e6ee3c89bd4e5092e5aaf60ba61f32d51bb1" protocol=ttrpc version=3 Mar 13 00:41:36.674164 containerd[1541]: time="2026-03-13T00:41:36.674116252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rhmv4,Uid:bbc4f76d-51f4-49bd-868e-f3c8f9362191,Namespace:kube-system,Attempt:0,} returns sandbox id \"8eefa839630ed9baadb0d5401a2e38ad57d6d4c5088bbbb55a1a781884143fcb\"" Mar 13 00:41:36.689161 containerd[1541]: time="2026-03-13T00:41:36.689100408Z" level=info msg="CreateContainer within sandbox \"8eefa839630ed9baadb0d5401a2e38ad57d6d4c5088bbbb55a1a781884143fcb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:41:36.693052 systemd[1]: Started cri-containerd-a6229729e9604472c940b014a17db9629deac0ae2d6de347080d9dc1d0bc5c4c.scope - libcontainer container a6229729e9604472c940b014a17db9629deac0ae2d6de347080d9dc1d0bc5c4c. 
Mar 13 00:41:36.711768 containerd[1541]: time="2026-03-13T00:41:36.710577634Z" level=info msg="Container 0d0be0547992a042d887a4ca6f2d03403bf314676ae211480308c88ae15016a8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:36.721763 containerd[1541]: time="2026-03-13T00:41:36.721686406Z" level=info msg="CreateContainer within sandbox \"8eefa839630ed9baadb0d5401a2e38ad57d6d4c5088bbbb55a1a781884143fcb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d0be0547992a042d887a4ca6f2d03403bf314676ae211480308c88ae15016a8\"" Mar 13 00:41:36.727439 containerd[1541]: time="2026-03-13T00:41:36.725554265Z" level=info msg="StartContainer for \"0d0be0547992a042d887a4ca6f2d03403bf314676ae211480308c88ae15016a8\"" Mar 13 00:41:36.728258 containerd[1541]: time="2026-03-13T00:41:36.728203831Z" level=info msg="connecting to shim 0d0be0547992a042d887a4ca6f2d03403bf314676ae211480308c88ae15016a8" address="unix:///run/containerd/s/cfcfd9b099d30ca532fa1dff7a1467f5ab08c8b0b9a7ce84d84cf937723645bd" protocol=ttrpc version=3 Mar 13 00:41:36.780281 systemd[1]: Started cri-containerd-0d0be0547992a042d887a4ca6f2d03403bf314676ae211480308c88ae15016a8.scope - libcontainer container 0d0be0547992a042d887a4ca6f2d03403bf314676ae211480308c88ae15016a8. Mar 13 00:41:36.797615 containerd[1541]: time="2026-03-13T00:41:36.797538316Z" level=info msg="StartContainer for \"a6229729e9604472c940b014a17db9629deac0ae2d6de347080d9dc1d0bc5c4c\" returns successfully" Mar 13 00:41:36.853603 containerd[1541]: time="2026-03-13T00:41:36.853544526Z" level=info msg="StartContainer for \"0d0be0547992a042d887a4ca6f2d03403bf314676ae211480308c88ae15016a8\" returns successfully" Mar 13 00:41:37.320753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655075341.mount: Deactivated successfully. 
Mar 13 00:41:37.396193 kubelet[2820]: I0313 00:41:37.396100 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rhmv4" podStartSLOduration=74.396074532 podStartE2EDuration="1m14.396074532s" podCreationTimestamp="2026-03-13 00:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:37.392362407 +0000 UTC m=+79.809662118" watchObservedRunningTime="2026-03-13 00:41:37.396074532 +0000 UTC m=+79.813374240" Mar 13 00:41:37.449918 kubelet[2820]: I0313 00:41:37.449839 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r9rrr" podStartSLOduration=74.449814185 podStartE2EDuration="1m14.449814185s" podCreationTimestamp="2026-03-13 00:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:37.449708949 +0000 UTC m=+79.867008659" watchObservedRunningTime="2026-03-13 00:41:37.449814185 +0000 UTC m=+79.867113893" Mar 13 00:41:38.483644 systemd[1]: Started sshd@22-10.128.0.75:22-20.161.92.111:52402.service - OpenSSH per-connection server daemon (20.161.92.111:52402). Mar 13 00:41:38.736886 sshd[4258]: Accepted publickey for core from 20.161.92.111 port 52402 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:38.738821 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:38.746822 systemd-logind[1519]: New session 15 of user core. Mar 13 00:41:38.754031 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 13 00:41:38.943975 sshd[4264]: Connection closed by 20.161.92.111 port 52402 Mar 13 00:41:38.945975 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:38.953282 systemd[1]: sshd@22-10.128.0.75:22-20.161.92.111:52402.service: Deactivated successfully. Mar 13 00:41:38.958239 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:41:38.960029 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:41:38.963471 systemd-logind[1519]: Removed session 15. Mar 13 00:41:43.993537 systemd[1]: Started sshd@23-10.128.0.75:22-20.161.92.111:46156.service - OpenSSH per-connection server daemon (20.161.92.111:46156). Mar 13 00:41:44.251589 sshd[4276]: Accepted publickey for core from 20.161.92.111 port 46156 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:44.253522 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:44.263801 systemd-logind[1519]: New session 16 of user core. Mar 13 00:41:44.269157 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:41:44.458302 sshd[4279]: Connection closed by 20.161.92.111 port 46156 Mar 13 00:41:44.459268 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:44.466009 systemd[1]: sshd@23-10.128.0.75:22-20.161.92.111:46156.service: Deactivated successfully. Mar 13 00:41:44.471462 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:41:44.473676 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:41:44.477639 systemd-logind[1519]: Removed session 16. Mar 13 00:41:44.513799 systemd[1]: Started sshd@24-10.128.0.75:22-20.161.92.111:46162.service - OpenSSH per-connection server daemon (20.161.92.111:46162). 
Mar 13 00:41:44.792338 sshd[4290]: Accepted publickey for core from 20.161.92.111 port 46162 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:44.794285 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:44.803100 systemd-logind[1519]: New session 17 of user core. Mar 13 00:41:44.813219 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:41:45.094245 systemd[1]: Started sshd@25-10.128.0.75:22-77.90.185.16:65105.service - OpenSSH per-connection server daemon (77.90.185.16:65105). Mar 13 00:41:45.237382 sshd[4293]: Connection closed by 20.161.92.111 port 46162 Mar 13 00:41:45.238311 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:45.245128 systemd[1]: sshd@24-10.128.0.75:22-20.161.92.111:46162.service: Deactivated successfully. Mar 13 00:41:45.251776 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:41:45.257061 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:41:45.259189 systemd-logind[1519]: Removed session 17. Mar 13 00:41:45.275882 sshd[4300]: Connection closed by 77.90.185.16 port 65105 Mar 13 00:41:45.285826 systemd[1]: sshd@25-10.128.0.75:22-77.90.185.16:65105.service: Deactivated successfully. Mar 13 00:41:45.294552 systemd[1]: Started sshd@26-10.128.0.75:22-20.161.92.111:46168.service - OpenSSH per-connection server daemon (20.161.92.111:46168). Mar 13 00:41:45.548569 sshd[4308]: Accepted publickey for core from 20.161.92.111 port 46168 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:45.551361 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:45.561668 systemd-logind[1519]: New session 18 of user core. Mar 13 00:41:45.570298 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 13 00:41:46.397536 sshd[4311]: Connection closed by 20.161.92.111 port 46168 Mar 13 00:41:46.400061 sshd-session[4308]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:46.413419 systemd[1]: sshd@26-10.128.0.75:22-20.161.92.111:46168.service: Deactivated successfully. Mar 13 00:41:46.420126 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:41:46.424426 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:41:46.426356 systemd-logind[1519]: Removed session 18. Mar 13 00:41:46.442494 systemd[1]: Started sshd@27-10.128.0.75:22-20.161.92.111:46180.service - OpenSSH per-connection server daemon (20.161.92.111:46180). Mar 13 00:41:46.682871 sshd[4326]: Accepted publickey for core from 20.161.92.111 port 46180 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:46.684274 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:46.693028 systemd-logind[1519]: New session 19 of user core. Mar 13 00:41:46.697084 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:41:47.075435 sshd[4329]: Connection closed by 20.161.92.111 port 46180 Mar 13 00:41:47.076995 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:47.084006 systemd[1]: sshd@27-10.128.0.75:22-20.161.92.111:46180.service: Deactivated successfully. Mar 13 00:41:47.089678 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:41:47.091846 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:41:47.095409 systemd-logind[1519]: Removed session 19. Mar 13 00:41:47.115150 systemd[1]: Started sshd@28-10.128.0.75:22-20.161.92.111:46188.service - OpenSSH per-connection server daemon (20.161.92.111:46188). 
Mar 13 00:41:47.340288 sshd[4339]: Accepted publickey for core from 20.161.92.111 port 46188 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:47.342333 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:47.351511 systemd-logind[1519]: New session 20 of user core. Mar 13 00:41:47.356272 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:41:47.531836 sshd[4342]: Connection closed by 20.161.92.111 port 46188 Mar 13 00:41:47.532917 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:47.540139 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:41:47.540839 systemd[1]: sshd@28-10.128.0.75:22-20.161.92.111:46188.service: Deactivated successfully. Mar 13 00:41:47.546195 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:41:47.550034 systemd-logind[1519]: Removed session 20. Mar 13 00:41:52.587231 systemd[1]: Started sshd@29-10.128.0.75:22-20.161.92.111:60426.service - OpenSSH per-connection server daemon (20.161.92.111:60426). Mar 13 00:41:52.829110 sshd[4356]: Accepted publickey for core from 20.161.92.111 port 60426 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:52.831620 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:52.841110 systemd-logind[1519]: New session 21 of user core. Mar 13 00:41:52.847575 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 13 00:41:53.035187 sshd[4361]: Connection closed by 20.161.92.111 port 60426 Mar 13 00:41:53.035894 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:53.045275 systemd[1]: sshd@29-10.128.0.75:22-20.161.92.111:60426.service: Deactivated successfully. Mar 13 00:41:53.051102 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:41:53.053374 systemd-logind[1519]: Session 21 logged out. 
Waiting for processes to exit. Mar 13 00:41:53.058489 systemd-logind[1519]: Removed session 21. Mar 13 00:41:55.373725 systemd[1]: sshd@9-10.128.0.75:22-183.250.89.44:49737.service: Deactivated successfully. Mar 13 00:41:58.093781 systemd[1]: Started sshd@30-10.128.0.75:22-20.161.92.111:60428.service - OpenSSH per-connection server daemon (20.161.92.111:60428). Mar 13 00:41:58.336643 sshd[4375]: Accepted publickey for core from 20.161.92.111 port 60428 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:41:58.340327 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:58.349051 systemd-logind[1519]: New session 22 of user core. Mar 13 00:41:58.358034 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:41:58.535118 sshd[4378]: Connection closed by 20.161.92.111 port 60428 Mar 13 00:41:58.536999 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:58.547704 systemd[1]: sshd@30-10.128.0.75:22-20.161.92.111:60428.service: Deactivated successfully. Mar 13 00:41:58.553095 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:41:58.554832 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit. Mar 13 00:41:58.557408 systemd-logind[1519]: Removed session 22. Mar 13 00:42:03.583878 systemd[1]: Started sshd@31-10.128.0.75:22-20.161.92.111:37846.service - OpenSSH per-connection server daemon (20.161.92.111:37846). Mar 13 00:42:03.838116 sshd[4390]: Accepted publickey for core from 20.161.92.111 port 37846 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:42:03.839918 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:03.850433 systemd-logind[1519]: New session 23 of user core. Mar 13 00:42:03.859280 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 13 00:42:04.041235 sshd[4393]: Connection closed by 20.161.92.111 port 37846 Mar 13 00:42:04.042980 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:04.050390 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit. Mar 13 00:42:04.051628 systemd[1]: sshd@31-10.128.0.75:22-20.161.92.111:37846.service: Deactivated successfully. Mar 13 00:42:04.056962 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:42:04.060282 systemd-logind[1519]: Removed session 23. Mar 13 00:42:04.089210 systemd[1]: Started sshd@32-10.128.0.75:22-20.161.92.111:37858.service - OpenSSH per-connection server daemon (20.161.92.111:37858). Mar 13 00:42:04.332785 sshd[4405]: Accepted publickey for core from 20.161.92.111 port 37858 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:42:04.333805 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:04.341719 systemd-logind[1519]: New session 24 of user core. Mar 13 00:42:04.347288 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 13 00:42:05.143775 systemd[1]: Started sshd@33-10.128.0.75:22-165.154.6.208:34596.service - OpenSSH per-connection server daemon (165.154.6.208:34596). 
Mar 13 00:42:06.261768 sshd[4416]: Invalid user intel from 165.154.6.208 port 34596 Mar 13 00:42:06.266791 containerd[1541]: time="2026-03-13T00:42:06.266017093Z" level=info msg="StopContainer for \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" with timeout 30 (s)" Mar 13 00:42:06.270118 containerd[1541]: time="2026-03-13T00:42:06.270005996Z" level=info msg="Stop container \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" with signal terminated" Mar 13 00:42:06.375042 containerd[1541]: time="2026-03-13T00:42:06.374973588Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:42:06.399079 systemd[1]: cri-containerd-5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453.scope: Deactivated successfully. Mar 13 00:42:06.404498 containerd[1541]: time="2026-03-13T00:42:06.404443959Z" level=info msg="received container exit event container_id:\"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" id:\"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" pid:3234 exited_at:{seconds:1773362526 nanos:403707254}" Mar 13 00:42:06.426144 containerd[1541]: time="2026-03-13T00:42:06.424884194Z" level=info msg="StopContainer for \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" with timeout 2 (s)" Mar 13 00:42:06.427362 containerd[1541]: time="2026-03-13T00:42:06.427316106Z" level=info msg="Stop container \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" with signal terminated" Mar 13 00:42:06.465165 systemd-networkd[1425]: lxc_health: Link DOWN Mar 13 00:42:06.465178 systemd-networkd[1425]: lxc_health: Lost carrier Mar 13 00:42:06.472079 sshd[4416]: Received disconnect from 165.154.6.208 port 34596:11: Bye Bye [preauth] Mar 13 00:42:06.472079 sshd[4416]: 
Disconnected from invalid user intel 165.154.6.208 port 34596 [preauth] Mar 13 00:42:06.489458 systemd[1]: sshd@33-10.128.0.75:22-165.154.6.208:34596.service: Deactivated successfully. Mar 13 00:42:06.521018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453-rootfs.mount: Deactivated successfully. Mar 13 00:42:06.545025 systemd[1]: cri-containerd-f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0.scope: Deactivated successfully. Mar 13 00:42:06.548522 systemd[1]: cri-containerd-f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0.scope: Consumed 10.392s CPU time, 125.1M memory peak, 128K read from disk, 13.3M written to disk. Mar 13 00:42:06.551371 containerd[1541]: time="2026-03-13T00:42:06.551316748Z" level=info msg="received container exit event container_id:\"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" id:\"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" pid:3538 exited_at:{seconds:1773362526 nanos:550092608}" Mar 13 00:42:06.574104 containerd[1541]: time="2026-03-13T00:42:06.574046477Z" level=info msg="StopContainer for \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" returns successfully" Mar 13 00:42:06.574937 containerd[1541]: time="2026-03-13T00:42:06.574858287Z" level=info msg="StopPodSandbox for \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\"" Mar 13 00:42:06.575131 containerd[1541]: time="2026-03-13T00:42:06.575075658Z" level=info msg="Container to stop \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:42:06.598291 systemd[1]: cri-containerd-2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168.scope: Deactivated successfully. 
Mar 13 00:42:06.603662 containerd[1541]: time="2026-03-13T00:42:06.603588165Z" level=info msg="received sandbox exit event container_id:\"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" id:\"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" exit_status:137 exited_at:{seconds:1773362526 nanos:601368684}" monitor_name=podsandbox Mar 13 00:42:06.617692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0-rootfs.mount: Deactivated successfully. Mar 13 00:42:06.634191 containerd[1541]: time="2026-03-13T00:42:06.634129076Z" level=info msg="StopContainer for \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" returns successfully" Mar 13 00:42:06.639637 containerd[1541]: time="2026-03-13T00:42:06.639440844Z" level=info msg="StopPodSandbox for \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\"" Mar 13 00:42:06.641652 containerd[1541]: time="2026-03-13T00:42:06.640972701Z" level=info msg="Container to stop \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:42:06.641652 containerd[1541]: time="2026-03-13T00:42:06.641031525Z" level=info msg="Container to stop \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:42:06.641652 containerd[1541]: time="2026-03-13T00:42:06.641075037Z" level=info msg="Container to stop \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:42:06.641652 containerd[1541]: time="2026-03-13T00:42:06.641099139Z" level=info msg="Container to stop \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:42:06.641652 
containerd[1541]: time="2026-03-13T00:42:06.641153309Z" level=info msg="Container to stop \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:42:06.670947 systemd[1]: cri-containerd-c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3.scope: Deactivated successfully. Mar 13 00:42:06.680196 containerd[1541]: time="2026-03-13T00:42:06.680036770Z" level=info msg="received sandbox exit event container_id:\"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" id:\"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" exit_status:137 exited_at:{seconds:1773362526 nanos:678206985}" monitor_name=podsandbox Mar 13 00:42:06.680588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168-rootfs.mount: Deactivated successfully. Mar 13 00:42:06.685922 containerd[1541]: time="2026-03-13T00:42:06.685852928Z" level=info msg="shim disconnected" id=2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168 namespace=k8s.io Mar 13 00:42:06.685922 containerd[1541]: time="2026-03-13T00:42:06.685893994Z" level=warning msg="cleaning up after shim disconnected" id=2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168 namespace=k8s.io Mar 13 00:42:06.686139 containerd[1541]: time="2026-03-13T00:42:06.685908228Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:42:06.716718 containerd[1541]: time="2026-03-13T00:42:06.716657739Z" level=info msg="TearDown network for sandbox \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" successfully" Mar 13 00:42:06.720022 containerd[1541]: time="2026-03-13T00:42:06.719839800Z" level=info msg="StopPodSandbox for \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" returns successfully" Mar 13 00:42:06.720022 containerd[1541]: time="2026-03-13T00:42:06.719848233Z" level=info 
msg="received sandbox container exit event sandbox_id:\"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" exit_status:137 exited_at:{seconds:1773362526 nanos:601368684}" monitor_name=criService Mar 13 00:42:06.725800 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168-shm.mount: Deactivated successfully. Mar 13 00:42:06.753873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3-rootfs.mount: Deactivated successfully. Mar 13 00:42:06.760754 containerd[1541]: time="2026-03-13T00:42:06.760510125Z" level=info msg="shim disconnected" id=c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3 namespace=k8s.io Mar 13 00:42:06.761190 containerd[1541]: time="2026-03-13T00:42:06.761025352Z" level=warning msg="cleaning up after shim disconnected" id=c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3 namespace=k8s.io Mar 13 00:42:06.761190 containerd[1541]: time="2026-03-13T00:42:06.761055875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:42:06.794435 containerd[1541]: time="2026-03-13T00:42:06.794258800Z" level=info msg="received sandbox container exit event sandbox_id:\"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" exit_status:137 exited_at:{seconds:1773362526 nanos:678206985}" monitor_name=criService Mar 13 00:42:06.794915 containerd[1541]: time="2026-03-13T00:42:06.794879364Z" level=info msg="TearDown network for sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" successfully" Mar 13 00:42:06.798985 containerd[1541]: time="2026-03-13T00:42:06.798830364Z" level=info msg="StopPodSandbox for \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" returns successfully" Mar 13 00:42:06.857830 kubelet[2820]: I0313 00:42:06.857751 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcdf42cb-e867-49f0-9c1a-cbdd31027443-cilium-config-path\") pod \"dcdf42cb-e867-49f0-9c1a-cbdd31027443\" (UID: \"dcdf42cb-e867-49f0-9c1a-cbdd31027443\") " Mar 13 00:42:06.857830 kubelet[2820]: I0313 00:42:06.857829 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xht5\" (UniqueName: \"kubernetes.io/projected/dcdf42cb-e867-49f0-9c1a-cbdd31027443-kube-api-access-8xht5\") pod \"dcdf42cb-e867-49f0-9c1a-cbdd31027443\" (UID: \"dcdf42cb-e867-49f0-9c1a-cbdd31027443\") " Mar 13 00:42:06.863188 kubelet[2820]: I0313 00:42:06.863047 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcdf42cb-e867-49f0-9c1a-cbdd31027443-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dcdf42cb-e867-49f0-9c1a-cbdd31027443" (UID: "dcdf42cb-e867-49f0-9c1a-cbdd31027443"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:42:06.866865 kubelet[2820]: I0313 00:42:06.866791 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdf42cb-e867-49f0-9c1a-cbdd31027443-kube-api-access-8xht5" (OuterVolumeSpecName: "kube-api-access-8xht5") pod "dcdf42cb-e867-49f0-9c1a-cbdd31027443" (UID: "dcdf42cb-e867-49f0-9c1a-cbdd31027443"). InnerVolumeSpecName "kube-api-access-8xht5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:42:06.958529 kubelet[2820]: I0313 00:42:06.958413 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cni-path\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.958529 kubelet[2820]: I0313 00:42:06.958500 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h62q2\" (UniqueName: \"kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-kube-api-access-h62q2\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.958529 kubelet[2820]: I0313 00:42:06.958529 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-xtables-lock\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959026 kubelet[2820]: I0313 00:42:06.958566 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-config-path\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959026 kubelet[2820]: I0313 00:42:06.958592 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-hubble-tls\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959026 kubelet[2820]: I0313 00:42:06.958631 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-kernel\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959026 kubelet[2820]: I0313 00:42:06.958629 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cni-path" (OuterVolumeSpecName: "cni-path") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.959026 kubelet[2820]: I0313 00:42:06.958656 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-hostproc\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959026 kubelet[2820]: I0313 00:42:06.958679 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-cgroup\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959321 kubelet[2820]: I0313 00:42:06.958701 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-lib-modules\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959321 kubelet[2820]: I0313 00:42:06.958763 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-etc-cni-netd\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 
00:42:06.959321 kubelet[2820]: I0313 00:42:06.958786 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-bpf-maps\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959321 kubelet[2820]: I0313 00:42:06.958824 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04c74789-40bc-417d-adcb-3fe88f9ed86a-clustermesh-secrets\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959321 kubelet[2820]: I0313 00:42:06.958884 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-net\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959321 kubelet[2820]: I0313 00:42:06.958908 2820 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-run\") pod \"04c74789-40bc-417d-adcb-3fe88f9ed86a\" (UID: \"04c74789-40bc-417d-adcb-3fe88f9ed86a\") " Mar 13 00:42:06.959642 kubelet[2820]: I0313 00:42:06.958987 2820 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcdf42cb-e867-49f0-9c1a-cbdd31027443-cilium-config-path\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:06.959642 kubelet[2820]: I0313 00:42:06.959010 2820 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xht5\" (UniqueName: \"kubernetes.io/projected/dcdf42cb-e867-49f0-9c1a-cbdd31027443-kube-api-access-8xht5\") on node 
\"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:06.959642 kubelet[2820]: I0313 00:42:06.959075 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.960915 kubelet[2820]: I0313 00:42:06.959996 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.960915 kubelet[2820]: I0313 00:42:06.960067 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.960915 kubelet[2820]: I0313 00:42:06.960789 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.960915 kubelet[2820]: I0313 00:42:06.960831 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.960915 kubelet[2820]: I0313 00:42:06.960858 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.965094 kubelet[2820]: I0313 00:42:06.965008 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.967366 kubelet[2820]: I0313 00:42:06.967285 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:42:06.967796 kubelet[2820]: I0313 00:42:06.967761 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.967973 kubelet[2820]: I0313 00:42:06.967796 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-hostproc" (OuterVolumeSpecName: "hostproc") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:42:06.969504 kubelet[2820]: I0313 00:42:06.969463 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-kube-api-access-h62q2" (OuterVolumeSpecName: "kube-api-access-h62q2") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "kube-api-access-h62q2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:42:06.969746 kubelet[2820]: I0313 00:42:06.969650 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c74789-40bc-417d-adcb-3fe88f9ed86a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:42:06.972152 kubelet[2820]: I0313 00:42:06.972070 2820 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "04c74789-40bc-417d-adcb-3fe88f9ed86a" (UID: "04c74789-40bc-417d-adcb-3fe88f9ed86a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:42:07.060673 kubelet[2820]: I0313 00:42:07.060286 2820 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-kernel\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.060673 kubelet[2820]: I0313 00:42:07.060384 2820 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-hostproc\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.060673 kubelet[2820]: I0313 00:42:07.060417 2820 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-cgroup\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.060673 kubelet[2820]: I0313 00:42:07.060509 2820 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-lib-modules\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.060673 kubelet[2820]: I0313 00:42:07.060585 2820 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-etc-cni-netd\") on node 
\"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.060673 kubelet[2820]: I0313 00:42:07.060604 2820 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-bpf-maps\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.060673 kubelet[2820]: I0313 00:42:07.060627 2820 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04c74789-40bc-417d-adcb-3fe88f9ed86a-clustermesh-secrets\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.061403 kubelet[2820]: I0313 00:42:07.060644 2820 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-host-proc-sys-net\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.061403 kubelet[2820]: I0313 00:42:07.060659 2820 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-run\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.061403 kubelet[2820]: I0313 00:42:07.060675 2820 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-cni-path\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.061403 kubelet[2820]: I0313 00:42:07.060690 2820 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h62q2\" (UniqueName: \"kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-kube-api-access-h62q2\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.062320 
kubelet[2820]: I0313 00:42:07.060706 2820 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04c74789-40bc-417d-adcb-3fe88f9ed86a-xtables-lock\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.062320 kubelet[2820]: I0313 00:42:07.062257 2820 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04c74789-40bc-417d-adcb-3fe88f9ed86a-cilium-config-path\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.062320 kubelet[2820]: I0313 00:42:07.062288 2820 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04c74789-40bc-417d-adcb-3fe88f9ed86a-hubble-tls\") on node \"ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136\" DevicePath \"\"" Mar 13 00:42:07.484442 kubelet[2820]: I0313 00:42:07.484372 2820 scope.go:117] "RemoveContainer" containerID="5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453" Mar 13 00:42:07.491294 containerd[1541]: time="2026-03-13T00:42:07.490790046Z" level=info msg="RemoveContainer for \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\"" Mar 13 00:42:07.497437 systemd[1]: Removed slice kubepods-besteffort-poddcdf42cb_e867_49f0_9c1a_cbdd31027443.slice - libcontainer container kubepods-besteffort-poddcdf42cb_e867_49f0_9c1a_cbdd31027443.slice. 
Mar 13 00:42:07.512213 containerd[1541]: time="2026-03-13T00:42:07.512139586Z" level=info msg="RemoveContainer for \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" returns successfully" Mar 13 00:42:07.514252 kubelet[2820]: I0313 00:42:07.514196 2820 scope.go:117] "RemoveContainer" containerID="5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453" Mar 13 00:42:07.515219 containerd[1541]: time="2026-03-13T00:42:07.515019163Z" level=error msg="ContainerStatus for \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\": not found" Mar 13 00:42:07.519602 containerd[1541]: time="2026-03-13T00:42:07.519365245Z" level=info msg="RemoveContainer for \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\"" Mar 13 00:42:07.519677 kubelet[2820]: E0313 00:42:07.515609 2820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\": not found" containerID="5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453" Mar 13 00:42:07.519677 kubelet[2820]: I0313 00:42:07.515670 2820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453"} err="failed to get container status \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\": rpc error: code = NotFound desc = an error occurred when try to find container \"5505570b01b2c40ac5428287f70834b32ede020e576bcaa9e9b89eea59928453\": not found" Mar 13 00:42:07.519677 kubelet[2820]: I0313 00:42:07.515785 2820 scope.go:117] "RemoveContainer" containerID="f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0" Mar 13 00:42:07.527242 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3-shm.mount: Deactivated successfully. Mar 13 00:42:07.527668 systemd[1]: var-lib-kubelet-pods-04c74789\x2d40bc\x2d417d\x2dadcb\x2d3fe88f9ed86a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 13 00:42:07.528129 systemd[1]: var-lib-kubelet-pods-04c74789\x2d40bc\x2d417d\x2dadcb\x2d3fe88f9ed86a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 13 00:42:07.528584 systemd[1]: var-lib-kubelet-pods-04c74789\x2d40bc\x2d417d\x2dadcb\x2d3fe88f9ed86a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh62q2.mount: Deactivated successfully. Mar 13 00:42:07.529013 systemd[1]: var-lib-kubelet-pods-dcdf42cb\x2de867\x2d49f0\x2d9c1a\x2dcbdd31027443-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8xht5.mount: Deactivated successfully. Mar 13 00:42:07.537462 containerd[1541]: time="2026-03-13T00:42:07.537213142Z" level=info msg="RemoveContainer for \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" returns successfully" Mar 13 00:42:07.537949 kubelet[2820]: I0313 00:42:07.537635 2820 scope.go:117] "RemoveContainer" containerID="0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4" Mar 13 00:42:07.543903 systemd[1]: Removed slice kubepods-burstable-pod04c74789_40bc_417d_adcb_3fe88f9ed86a.slice - libcontainer container kubepods-burstable-pod04c74789_40bc_417d_adcb_3fe88f9ed86a.slice. Mar 13 00:42:07.544933 containerd[1541]: time="2026-03-13T00:42:07.543911520Z" level=info msg="RemoveContainer for \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\"" Mar 13 00:42:07.544079 systemd[1]: kubepods-burstable-pod04c74789_40bc_417d_adcb_3fe88f9ed86a.slice: Consumed 10.581s CPU time, 125.6M memory peak, 128K read from disk, 13.3M written to disk. 
Mar 13 00:42:07.557215 containerd[1541]: time="2026-03-13T00:42:07.557076371Z" level=info msg="RemoveContainer for \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\" returns successfully" Mar 13 00:42:07.557915 kubelet[2820]: I0313 00:42:07.557828 2820 scope.go:117] "RemoveContainer" containerID="8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a" Mar 13 00:42:07.563307 containerd[1541]: time="2026-03-13T00:42:07.563207696Z" level=info msg="RemoveContainer for \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\"" Mar 13 00:42:07.574580 containerd[1541]: time="2026-03-13T00:42:07.574493014Z" level=info msg="RemoveContainer for \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\" returns successfully" Mar 13 00:42:07.575166 kubelet[2820]: I0313 00:42:07.575132 2820 scope.go:117] "RemoveContainer" containerID="9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678" Mar 13 00:42:07.578624 containerd[1541]: time="2026-03-13T00:42:07.577904020Z" level=info msg="RemoveContainer for \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\"" Mar 13 00:42:07.584316 containerd[1541]: time="2026-03-13T00:42:07.584194536Z" level=info msg="RemoveContainer for \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\" returns successfully" Mar 13 00:42:07.585126 kubelet[2820]: I0313 00:42:07.585067 2820 scope.go:117] "RemoveContainer" containerID="e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809" Mar 13 00:42:07.590006 containerd[1541]: time="2026-03-13T00:42:07.589936569Z" level=info msg="RemoveContainer for \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\"" Mar 13 00:42:07.596944 containerd[1541]: time="2026-03-13T00:42:07.596804015Z" level=info msg="RemoveContainer for \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\" returns successfully" Mar 13 00:42:07.597387 kubelet[2820]: I0313 00:42:07.597238 2820 scope.go:117] 
"RemoveContainer" containerID="f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0" Mar 13 00:42:07.598900 containerd[1541]: time="2026-03-13T00:42:07.598155697Z" level=error msg="ContainerStatus for \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\": not found" Mar 13 00:42:07.598900 containerd[1541]: time="2026-03-13T00:42:07.598835356Z" level=error msg="ContainerStatus for \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\": not found" Mar 13 00:42:07.599137 kubelet[2820]: E0313 00:42:07.598418 2820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\": not found" containerID="f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0" Mar 13 00:42:07.599137 kubelet[2820]: I0313 00:42:07.598468 2820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0"} err="failed to get container status \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6f10b6ee504936c17b4b8a62a9ebf168e272ad6dddc23530d8261e438ecf4a0\": not found" Mar 13 00:42:07.599137 kubelet[2820]: I0313 00:42:07.598508 2820 scope.go:117] "RemoveContainer" containerID="0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4" Mar 13 00:42:07.599757 kubelet[2820]: E0313 00:42:07.599575 2820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\": not found" containerID="0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4" Mar 13 00:42:07.599757 kubelet[2820]: I0313 00:42:07.599613 2820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4"} err="failed to get container status \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d970422f49ef2f8186a5bc939b399e612172a2dbb269932873935eed76924f4\": not found" Mar 13 00:42:07.599757 kubelet[2820]: I0313 00:42:07.599640 2820 scope.go:117] "RemoveContainer" containerID="8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a" Mar 13 00:42:07.600536 containerd[1541]: time="2026-03-13T00:42:07.600444081Z" level=error msg="ContainerStatus for \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\": not found" Mar 13 00:42:07.600947 kubelet[2820]: E0313 00:42:07.600786 2820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\": not found" containerID="8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a" Mar 13 00:42:07.600947 kubelet[2820]: I0313 00:42:07.600827 2820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a"} err="failed to get container status \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"8d5da738f32d63f176585c5526fbeb321a5c29ad169c8fdb4c4e36bc47b46d0a\": not found" Mar 13 00:42:07.600947 kubelet[2820]: I0313 00:42:07.600847 2820 scope.go:117] "RemoveContainer" containerID="9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678" Mar 13 00:42:07.601584 containerd[1541]: time="2026-03-13T00:42:07.601534561Z" level=error msg="ContainerStatus for \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\": not found" Mar 13 00:42:07.602695 kubelet[2820]: E0313 00:42:07.601889 2820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\": not found" containerID="9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678" Mar 13 00:42:07.602695 kubelet[2820]: I0313 00:42:07.602083 2820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678"} err="failed to get container status \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\": rpc error: code = NotFound desc = an error occurred when try to find container \"9da87fd51e5db3392d41dfc596b8060f671b57afe0ceb92c34481cd7156ee678\": not found" Mar 13 00:42:07.602695 kubelet[2820]: I0313 00:42:07.602167 2820 scope.go:117] "RemoveContainer" containerID="e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809" Mar 13 00:42:07.603177 containerd[1541]: time="2026-03-13T00:42:07.602585634Z" level=error msg="ContainerStatus for \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\": not found" Mar 13 00:42:07.603685 kubelet[2820]: E0313 00:42:07.603591 2820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\": not found" containerID="e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809" Mar 13 00:42:07.603685 kubelet[2820]: I0313 00:42:07.603641 2820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809"} err="failed to get container status \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\": rpc error: code = NotFound desc = an error occurred when try to find container \"e57cfc53d465fa2e0f28869845049f91cf66c5489cc5c92f13dbda5d2bf84809\": not found" Mar 13 00:42:07.930905 kubelet[2820]: I0313 00:42:07.930845 2820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c74789-40bc-417d-adcb-3fe88f9ed86a" path="/var/lib/kubelet/pods/04c74789-40bc-417d-adcb-3fe88f9ed86a/volumes" Mar 13 00:42:07.931926 kubelet[2820]: I0313 00:42:07.931891 2820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdf42cb-e867-49f0-9c1a-cbdd31027443" path="/var/lib/kubelet/pods/dcdf42cb-e867-49f0-9c1a-cbdd31027443/volumes" Mar 13 00:42:08.088505 kubelet[2820]: E0313 00:42:08.088445 2820 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 00:42:08.191137 sshd[4408]: Connection closed by 20.161.92.111 port 37858 Mar 13 00:42:08.192060 sshd-session[4405]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:08.200339 systemd[1]: sshd@32-10.128.0.75:22-20.161.92.111:37858.service: Deactivated successfully. 
Mar 13 00:42:08.203612 systemd[1]: session-24.scope: Deactivated successfully. Mar 13 00:42:08.203982 systemd[1]: session-24.scope: Consumed 1.205s CPU time, 24M memory peak. Mar 13 00:42:08.205216 systemd-logind[1519]: Session 24 logged out. Waiting for processes to exit. Mar 13 00:42:08.208103 systemd-logind[1519]: Removed session 24. Mar 13 00:42:08.244332 systemd[1]: Started sshd@34-10.128.0.75:22-20.161.92.111:37874.service - OpenSSH per-connection server daemon (20.161.92.111:37874). Mar 13 00:42:08.491595 sshd[4562]: Accepted publickey for core from 20.161.92.111 port 37874 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:42:08.493135 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:08.501876 systemd-logind[1519]: New session 25 of user core. Mar 13 00:42:08.508175 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 13 00:42:09.132692 ntpd[1599]: Deleting 10 lxc_health, [fe80::7cc6:c3ff:fe36:e8b7%8]:123, stats: received=0, sent=0, dropped=0, active_time=35 secs Mar 13 00:42:09.133923 ntpd[1599]: 13 Mar 00:42:09 ntpd[1599]: Deleting 10 lxc_health, [fe80::7cc6:c3ff:fe36:e8b7%8]:123, stats: received=0, sent=0, dropped=0, active_time=35 secs Mar 13 00:42:09.325377 sshd[4565]: Connection closed by 20.161.92.111 port 37874 Mar 13 00:42:09.326605 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:09.340473 systemd[1]: sshd@34-10.128.0.75:22-20.161.92.111:37874.service: Deactivated successfully. Mar 13 00:42:09.351406 systemd[1]: session-25.scope: Deactivated successfully. Mar 13 00:42:09.379917 systemd-logind[1519]: Session 25 logged out. Waiting for processes to exit. Mar 13 00:42:09.385975 systemd[1]: Created slice kubepods-burstable-pod600da5ee_366d_4acd_87e5_0a62911e0ead.slice - libcontainer container kubepods-burstable-pod600da5ee_366d_4acd_87e5_0a62911e0ead.slice. 
Mar 13 00:42:09.391575 systemd[1]: Started sshd@35-10.128.0.75:22-20.161.92.111:37876.service - OpenSSH per-connection server daemon (20.161.92.111:37876).
Mar 13 00:42:09.396512 systemd-logind[1519]: Removed session 25.
Mar 13 00:42:09.479014 kubelet[2820]: I0313 00:42:09.478964 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-bpf-maps\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.479014 kubelet[2820]: I0313 00:42:09.479020 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-hostproc\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.480962 kubelet[2820]: I0313 00:42:09.479050 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/600da5ee-366d-4acd-87e5-0a62911e0ead-hubble-tls\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.480962 kubelet[2820]: I0313 00:42:09.479082 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmdz5\" (UniqueName: \"kubernetes.io/projected/600da5ee-366d-4acd-87e5-0a62911e0ead-kube-api-access-xmdz5\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.480962 kubelet[2820]: I0313 00:42:09.479113 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-cni-path\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.480962 kubelet[2820]: I0313 00:42:09.479139 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-cilium-cgroup\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.480962 kubelet[2820]: I0313 00:42:09.479165 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-host-proc-sys-net\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.480962 kubelet[2820]: I0313 00:42:09.479227 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-lib-modules\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.482582 kubelet[2820]: I0313 00:42:09.479254 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/600da5ee-366d-4acd-87e5-0a62911e0ead-clustermesh-secrets\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.482582 kubelet[2820]: I0313 00:42:09.479284 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/600da5ee-366d-4acd-87e5-0a62911e0ead-cilium-config-path\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.482582 kubelet[2820]: I0313 00:42:09.479306 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/600da5ee-366d-4acd-87e5-0a62911e0ead-cilium-ipsec-secrets\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.482582 kubelet[2820]: I0313 00:42:09.479331 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-host-proc-sys-kernel\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.482582 kubelet[2820]: I0313 00:42:09.479378 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-cilium-run\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.482904 kubelet[2820]: I0313 00:42:09.479408 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-etc-cni-netd\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.482904 kubelet[2820]: I0313 00:42:09.479437 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/600da5ee-366d-4acd-87e5-0a62911e0ead-xtables-lock\") pod \"cilium-q589t\" (UID: \"600da5ee-366d-4acd-87e5-0a62911e0ead\") " pod="kube-system/cilium-q589t"
Mar 13 00:42:09.699483 sshd[4575]: Accepted publickey for core from 20.161.92.111 port 37876 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:42:09.701366 sshd-session[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:09.709916 containerd[1541]: time="2026-03-13T00:42:09.709791513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q589t,Uid:600da5ee-366d-4acd-87e5-0a62911e0ead,Namespace:kube-system,Attempt:0,}"
Mar 13 00:42:09.711394 systemd-logind[1519]: New session 26 of user core.
Mar 13 00:42:09.717922 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 13 00:42:09.747167 containerd[1541]: time="2026-03-13T00:42:09.747082453Z" level=info msg="connecting to shim c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760" address="unix:///run/containerd/s/7db894f40c1f30141f1e32fe9f74ddd4bd9171cc2926d52d0e588b526180c513" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:42:09.798044 systemd[1]: Started cri-containerd-c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760.scope - libcontainer container c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760.
Mar 13 00:42:09.810934 sshd[4582]: Connection closed by 20.161.92.111 port 37876
Mar 13 00:42:09.811710 sshd-session[4575]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:09.820443 systemd-logind[1519]: Session 26 logged out. Waiting for processes to exit.
Mar 13 00:42:09.820712 systemd[1]: sshd@35-10.128.0.75:22-20.161.92.111:37876.service: Deactivated successfully.
Mar 13 00:42:09.824989 systemd[1]: session-26.scope: Deactivated successfully.
Mar 13 00:42:09.833540 systemd-logind[1519]: Removed session 26.
Mar 13 00:42:09.865212 systemd[1]: Started sshd@36-10.128.0.75:22-20.161.92.111:37880.service - OpenSSH per-connection server daemon (20.161.92.111:37880).
Mar 13 00:42:09.873950 containerd[1541]: time="2026-03-13T00:42:09.873840280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q589t,Uid:600da5ee-366d-4acd-87e5-0a62911e0ead,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\""
Mar 13 00:42:09.886687 containerd[1541]: time="2026-03-13T00:42:09.886048715Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 13 00:42:09.909490 containerd[1541]: time="2026-03-13T00:42:09.909420162Z" level=info msg="Container 9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:09.919788 containerd[1541]: time="2026-03-13T00:42:09.919698096Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0\""
Mar 13 00:42:09.923000 containerd[1541]: time="2026-03-13T00:42:09.921821557Z" level=info msg="StartContainer for \"9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0\""
Mar 13 00:42:09.924869 containerd[1541]: time="2026-03-13T00:42:09.924817056Z" level=info msg="connecting to shim 9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0" address="unix:///run/containerd/s/7db894f40c1f30141f1e32fe9f74ddd4bd9171cc2926d52d0e588b526180c513" protocol=ttrpc version=3
Mar 13 00:42:09.972074 systemd[1]: Started cri-containerd-9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0.scope - libcontainer container 9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0.
Mar 13 00:42:10.035930 containerd[1541]: time="2026-03-13T00:42:10.035885463Z" level=info msg="StartContainer for \"9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0\" returns successfully"
Mar 13 00:42:10.054197 systemd[1]: cri-containerd-9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0.scope: Deactivated successfully.
Mar 13 00:42:10.063645 containerd[1541]: time="2026-03-13T00:42:10.063555976Z" level=info msg="received container exit event container_id:\"9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0\" id:\"9df2b5244e04d02aa3bbe965c88e03423a23dad0cceaf6e539b3f3d90eb9acf0\" pid:4650 exited_at:{seconds:1773362530 nanos:61562099}"
Mar 13 00:42:10.143871 sshd[4633]: Accepted publickey for core from 20.161.92.111 port 37880 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:42:10.146544 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:42:10.155240 systemd-logind[1519]: New session 27 of user core.
Mar 13 00:42:10.161079 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 13 00:42:10.534772 containerd[1541]: time="2026-03-13T00:42:10.534644442Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 13 00:42:10.549781 containerd[1541]: time="2026-03-13T00:42:10.549432040Z" level=info msg="Container 1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:10.562545 containerd[1541]: time="2026-03-13T00:42:10.562460903Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63\""
Mar 13 00:42:10.564242 containerd[1541]: time="2026-03-13T00:42:10.563953029Z" level=info msg="StartContainer for \"1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63\""
Mar 13 00:42:10.565757 containerd[1541]: time="2026-03-13T00:42:10.565695279Z" level=info msg="connecting to shim 1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63" address="unix:///run/containerd/s/7db894f40c1f30141f1e32fe9f74ddd4bd9171cc2926d52d0e588b526180c513" protocol=ttrpc version=3
Mar 13 00:42:10.595949 systemd[1]: Started cri-containerd-1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63.scope - libcontainer container 1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63.
Mar 13 00:42:10.649221 containerd[1541]: time="2026-03-13T00:42:10.649080095Z" level=info msg="StartContainer for \"1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63\" returns successfully"
Mar 13 00:42:10.659877 systemd[1]: cri-containerd-1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63.scope: Deactivated successfully.
Mar 13 00:42:10.662029 containerd[1541]: time="2026-03-13T00:42:10.661976802Z" level=info msg="received container exit event container_id:\"1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63\" id:\"1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63\" pid:4702 exited_at:{seconds:1773362530 nanos:660789474}"
Mar 13 00:42:10.701404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a2b3335a245a75a4d59df07093602ee5612b579cb23ebbf5cab9c059d2aac63-rootfs.mount: Deactivated successfully.
Mar 13 00:42:10.977334 kubelet[2820]: I0313 00:42:10.977244 2820 setters.go:543] "Node became not ready" node="ci-4459-2-4-nightly-20260312-2100-a2f4eb32459bc881a136" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T00:42:10Z","lastTransitionTime":"2026-03-13T00:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 13 00:42:11.542885 containerd[1541]: time="2026-03-13T00:42:11.542818720Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 13 00:42:11.561929 containerd[1541]: time="2026-03-13T00:42:11.561869309Z" level=info msg="Container bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:11.577821 containerd[1541]: time="2026-03-13T00:42:11.577753197Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514\""
Mar 13 00:42:11.578756 containerd[1541]: time="2026-03-13T00:42:11.578603570Z" level=info msg="StartContainer for \"bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514\""
Mar 13 00:42:11.581004 containerd[1541]: time="2026-03-13T00:42:11.580951312Z" level=info msg="connecting to shim bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514" address="unix:///run/containerd/s/7db894f40c1f30141f1e32fe9f74ddd4bd9171cc2926d52d0e588b526180c513" protocol=ttrpc version=3
Mar 13 00:42:11.615006 systemd[1]: Started cri-containerd-bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514.scope - libcontainer container bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514.
Mar 13 00:42:11.739409 systemd[1]: cri-containerd-bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514.scope: Deactivated successfully.
Mar 13 00:42:11.741568 containerd[1541]: time="2026-03-13T00:42:11.741492219Z" level=info msg="StartContainer for \"bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514\" returns successfully"
Mar 13 00:42:11.745060 containerd[1541]: time="2026-03-13T00:42:11.745006880Z" level=info msg="received container exit event container_id:\"bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514\" id:\"bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514\" pid:4749 exited_at:{seconds:1773362531 nanos:744569784}"
Mar 13 00:42:11.782205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb3aeeec812ff4ae44e135629838e64acc35f51210a29e99cb6f2c26b2a13514-rootfs.mount: Deactivated successfully.
Mar 13 00:42:12.551518 containerd[1541]: time="2026-03-13T00:42:12.551415898Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 00:42:12.569845 containerd[1541]: time="2026-03-13T00:42:12.569790192Z" level=info msg="Container ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:12.580288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159978482.mount: Deactivated successfully.
Mar 13 00:42:12.587170 containerd[1541]: time="2026-03-13T00:42:12.587121123Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868\""
Mar 13 00:42:12.590769 containerd[1541]: time="2026-03-13T00:42:12.588192075Z" level=info msg="StartContainer for \"ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868\""
Mar 13 00:42:12.590769 containerd[1541]: time="2026-03-13T00:42:12.589643654Z" level=info msg="connecting to shim ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868" address="unix:///run/containerd/s/7db894f40c1f30141f1e32fe9f74ddd4bd9171cc2926d52d0e588b526180c513" protocol=ttrpc version=3
Mar 13 00:42:12.624120 systemd[1]: Started cri-containerd-ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868.scope - libcontainer container ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868.
Mar 13 00:42:12.668170 systemd[1]: cri-containerd-ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868.scope: Deactivated successfully.
Mar 13 00:42:12.670472 containerd[1541]: time="2026-03-13T00:42:12.670429153Z" level=info msg="received container exit event container_id:\"ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868\" id:\"ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868\" pid:4790 exited_at:{seconds:1773362532 nanos:670228346}"
Mar 13 00:42:12.685831 containerd[1541]: time="2026-03-13T00:42:12.685780872Z" level=info msg="StartContainer for \"ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868\" returns successfully"
Mar 13 00:42:12.709288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ece1e69139f623ab05171aee3676164abd4b670953ed5a06f57b05712203f868-rootfs.mount: Deactivated successfully.
Mar 13 00:42:13.090459 kubelet[2820]: E0313 00:42:13.090359 2820 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 13 00:42:13.574051 containerd[1541]: time="2026-03-13T00:42:13.573761449Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 13 00:42:13.593657 containerd[1541]: time="2026-03-13T00:42:13.592088637Z" level=info msg="Container 153f40671c41ad9b4b1b85b58489cd9959c0dd9286b3f873b2fe9a87c1c0c38d: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:42:13.611149 containerd[1541]: time="2026-03-13T00:42:13.611077370Z" level=info msg="CreateContainer within sandbox \"c2ba55a12ce4a984d2a22bf7fab4aace0bfcc41b14c69ee60667c6340593d760\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"153f40671c41ad9b4b1b85b58489cd9959c0dd9286b3f873b2fe9a87c1c0c38d\""
Mar 13 00:42:13.612439 containerd[1541]: time="2026-03-13T00:42:13.612407316Z" level=info msg="StartContainer for \"153f40671c41ad9b4b1b85b58489cd9959c0dd9286b3f873b2fe9a87c1c0c38d\""
Mar 13 00:42:13.614337 containerd[1541]: time="2026-03-13T00:42:13.614275848Z" level=info msg="connecting to shim 153f40671c41ad9b4b1b85b58489cd9959c0dd9286b3f873b2fe9a87c1c0c38d" address="unix:///run/containerd/s/7db894f40c1f30141f1e32fe9f74ddd4bd9171cc2926d52d0e588b526180c513" protocol=ttrpc version=3
Mar 13 00:42:13.648104 systemd[1]: Started cri-containerd-153f40671c41ad9b4b1b85b58489cd9959c0dd9286b3f873b2fe9a87c1c0c38d.scope - libcontainer container 153f40671c41ad9b4b1b85b58489cd9959c0dd9286b3f873b2fe9a87c1c0c38d.
Mar 13 00:42:13.720404 containerd[1541]: time="2026-03-13T00:42:13.720246028Z" level=info msg="StartContainer for \"153f40671c41ad9b4b1b85b58489cd9959c0dd9286b3f873b2fe9a87c1c0c38d\" returns successfully"
Mar 13 00:42:14.221010 systemd[1]: Started sshd@37-10.128.0.75:22-194.107.115.2:55752.service - OpenSSH per-connection server daemon (194.107.115.2:55752).
Mar 13 00:42:14.337838 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 13 00:42:15.347178 sshd[4889]: Invalid user diego from 194.107.115.2 port 55752
Mar 13 00:42:15.550250 sshd[4889]: Received disconnect from 194.107.115.2 port 55752:11: Bye Bye [preauth]
Mar 13 00:42:15.550250 sshd[4889]: Disconnected from invalid user diego 194.107.115.2 port 55752 [preauth]
Mar 13 00:42:15.553916 systemd[1]: sshd@37-10.128.0.75:22-194.107.115.2:55752.service: Deactivated successfully.
Mar 13 00:42:17.803680 containerd[1541]: time="2026-03-13T00:42:17.803378770Z" level=info msg="StopPodSandbox for \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\""
Mar 13 00:42:17.803680 containerd[1541]: time="2026-03-13T00:42:17.803586194Z" level=info msg="TearDown network for sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" successfully"
Mar 13 00:42:17.803680 containerd[1541]: time="2026-03-13T00:42:17.803607446Z" level=info msg="StopPodSandbox for \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" returns successfully"
Mar 13 00:42:17.805859 containerd[1541]: time="2026-03-13T00:42:17.805169965Z" level=info msg="RemovePodSandbox for \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\""
Mar 13 00:42:17.805859 containerd[1541]: time="2026-03-13T00:42:17.805216656Z" level=info msg="Forcibly stopping sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\""
Mar 13 00:42:17.805859 containerd[1541]: time="2026-03-13T00:42:17.805342839Z" level=info msg="TearDown network for sandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" successfully"
Mar 13 00:42:17.807624 containerd[1541]: time="2026-03-13T00:42:17.807568510Z" level=info msg="Ensure that sandbox c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3 in task-service has been cleanup successfully"
Mar 13 00:42:17.814477 containerd[1541]: time="2026-03-13T00:42:17.814410345Z" level=info msg="RemovePodSandbox \"c1e44fdd63c8dd2f6197ab2b6820183cc0b11df66b1759ee6681c291852269b3\" returns successfully"
Mar 13 00:42:17.815332 containerd[1541]: time="2026-03-13T00:42:17.815294719Z" level=info msg="StopPodSandbox for \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\""
Mar 13 00:42:17.815702 containerd[1541]: time="2026-03-13T00:42:17.815463599Z" level=info msg="TearDown network for sandbox \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" successfully"
Mar 13 00:42:17.815702 containerd[1541]: time="2026-03-13T00:42:17.815488243Z" level=info msg="StopPodSandbox for \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" returns successfully"
Mar 13 00:42:17.816901 containerd[1541]: time="2026-03-13T00:42:17.816020146Z" level=info msg="RemovePodSandbox for \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\""
Mar 13 00:42:17.816901 containerd[1541]: time="2026-03-13T00:42:17.816060754Z" level=info msg="Forcibly stopping sandbox \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\""
Mar 13 00:42:17.816901 containerd[1541]: time="2026-03-13T00:42:17.816174195Z" level=info msg="TearDown network for sandbox \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" successfully"
Mar 13 00:42:17.818452 containerd[1541]: time="2026-03-13T00:42:17.818412075Z" level=info msg="Ensure that sandbox 2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168 in task-service has been cleanup successfully"
Mar 13 00:42:17.824887 containerd[1541]: time="2026-03-13T00:42:17.824829418Z" level=info msg="RemovePodSandbox \"2ca1326a0149436d61005520205e5567fa1e9f413313f684cbbf6f5181ef3168\" returns successfully"
Mar 13 00:42:18.072462 systemd-networkd[1425]: lxc_health: Link UP
Mar 13 00:42:18.078753 systemd-networkd[1425]: lxc_health: Gained carrier
Mar 13 00:42:19.709031 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Mar 13 00:42:19.741845 kubelet[2820]: I0313 00:42:19.741766 2820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q589t" podStartSLOduration=10.741712447 podStartE2EDuration="10.741712447s" podCreationTimestamp="2026-03-13 00:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:42:14.636567247 +0000 UTC m=+117.053866955" watchObservedRunningTime="2026-03-13 00:42:19.741712447 +0000 UTC m=+122.159012153"
Mar 13 00:42:22.133725 ntpd[1599]: Listen normally on 13 lxc_health [fe80::4073:3eff:fede:e335%14]:123
Mar 13 00:42:22.134686 ntpd[1599]: 13 Mar 00:42:22 ntpd[1599]: Listen normally on 13 lxc_health [fe80::4073:3eff:fede:e335%14]:123
Mar 13 00:42:25.652019 kubelet[2820]: E0313 00:42:25.651918 2820 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60290->127.0.0.1:43613: write tcp 127.0.0.1:60290->127.0.0.1:43613: write: broken pipe
Mar 13 00:42:25.688874 sshd[4683]: Connection closed by 20.161.92.111 port 37880
Mar 13 00:42:25.690040 sshd-session[4633]: pam_unix(sshd:session): session closed for user core
Mar 13 00:42:25.698088 systemd[1]: sshd@36-10.128.0.75:22-20.161.92.111:37880.service: Deactivated successfully.
Mar 13 00:42:25.702671 systemd[1]: session-27.scope: Deactivated successfully.
Mar 13 00:42:25.706489 systemd-logind[1519]: Session 27 logged out. Waiting for processes to exit.
Mar 13 00:42:25.709837 systemd-logind[1519]: Removed session 27.
Mar 13 00:42:26.185521 systemd[1]: Started sshd@38-10.128.0.75:22-103.76.120.225:49698.service - OpenSSH per-connection server daemon (103.76.120.225:49698).
Mar 13 00:42:27.637753 sshd[5513]: Received disconnect from 103.76.120.225 port 49698:11: Bye Bye [preauth]
Mar 13 00:42:27.637753 sshd[5513]: Disconnected from authenticating user root 103.76.120.225 port 49698 [preauth]
Mar 13 00:42:27.641074 systemd[1]: sshd@38-10.128.0.75:22-103.76.120.225:49698.service: Deactivated successfully.