May 15 00:06:19.090968 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:19:37 -00 2025 May 15 00:06:19.091016 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc May 15 00:06:19.091033 kernel: BIOS-provided physical RAM map: May 15 00:06:19.091047 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved May 15 00:06:19.091060 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable May 15 00:06:19.091075 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved May 15 00:06:19.091092 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable May 15 00:06:19.091107 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved May 15 00:06:19.091127 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd326fff] usable May 15 00:06:19.091215 kernel: BIOS-e820: [mem 0x00000000bd327000-0x00000000bd32efff] ACPI data May 15 00:06:19.091230 kernel: BIOS-e820: [mem 0x00000000bd32f000-0x00000000bf8ecfff] usable May 15 00:06:19.091245 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved May 15 00:06:19.091260 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data May 15 00:06:19.091274 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS May 15 00:06:19.091299 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable May 15 00:06:19.091315 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved May 15 00:06:19.091331 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable May 15 00:06:19.091347 kernel: NX (Execute Disable) protection: active May 15 00:06:19.091362 kernel: APIC: Static calls initialized May 15 00:06:19.091378 kernel: efi: EFI v2.7 by EDK II May 15 00:06:19.091394 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd327018 May 15 00:06:19.091410 kernel: random: crng init done May 15 00:06:19.091425 kernel: secureboot: Secure boot disabled May 15 00:06:19.091439 kernel: SMBIOS 2.4 present. 
May 15 00:06:19.091460 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 May 15 00:06:19.091475 kernel: Hypervisor detected: KVM May 15 00:06:19.091501 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 00:06:19.091517 kernel: kvm-clock: using sched offset of 13085702772 cycles May 15 00:06:19.091533 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 00:06:19.091548 kernel: tsc: Detected 2299.998 MHz processor May 15 00:06:19.091564 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 00:06:19.091582 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 00:06:19.091598 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 May 15 00:06:19.091613 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs May 15 00:06:19.091634 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 00:06:19.091650 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 May 15 00:06:19.091666 kernel: Using GB pages for direct mapping May 15 00:06:19.091683 kernel: ACPI: Early table checksum verification disabled May 15 00:06:19.091699 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) May 15 00:06:19.091715 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) May 15 00:06:19.091739 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) May 15 00:06:19.091760 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) May 15 00:06:19.091776 kernel: ACPI: FACS 0x00000000BFBF2000 000040 May 15 00:06:19.091794 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) May 15 00:06:19.091811 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) May 15 00:06:19.091828 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) May 15 00:06:19.091844 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) May 15 00:06:19.091859 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) May 15 00:06:19.091881 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) May 15 00:06:19.091898 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] May 15 00:06:19.091914 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] May 15 00:06:19.091931 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] May 15 00:06:19.091949 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] May 15 00:06:19.091966 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] May 15 00:06:19.091984 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] May 15 00:06:19.092000 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] May 15 00:06:19.092020 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] May 15 00:06:19.092037 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] May 15 00:06:19.092052 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 15 00:06:19.092073 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 15 00:06:19.092090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 15 00:06:19.092107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] May 15 00:06:19.092124 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] May 15 00:06:19.094172 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] May 15 00:06:19.094200 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] May 15 00:06:19.094224 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] May 15 00:06:19.094242 kernel: Zone ranges: May 15 00:06:19.094269 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 00:06:19.094286 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 15 00:06:19.094303 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] May 15 00:06:19.094320 kernel: Movable zone start for each node May 15 00:06:19.094337 kernel: Early memory node ranges May 15 00:06:19.094354 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] May 15 00:06:19.094369 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] May 15 00:06:19.094385 kernel: node 0: [mem 0x0000000000100000-0x00000000bd326fff] May 15 00:06:19.094407 kernel: node 0: [mem 0x00000000bd32f000-0x00000000bf8ecfff] May 15 00:06:19.094423 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] May 15 00:06:19.094440 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] May 15 00:06:19.094456 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] May 15 00:06:19.094473 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:06:19.094498 kernel: On node 0, zone DMA: 11 pages in unavailable ranges May 15 00:06:19.094515 kernel: On node 0, zone DMA: 104 pages in unavailable ranges May 15 00:06:19.094533 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges May 15 00:06:19.094550 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 15 00:06:19.094571 kernel: On node 0, zone Normal: 32 pages in unavailable ranges May 15 00:06:19.094588 kernel: ACPI: PM-Timer IO Port: 0xb008 May 15 00:06:19.094605 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 00:06:19.094623 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 00:06:19.094640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 00:06:19.094657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 00:06:19.094673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 00:06:19.094690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 00:06:19.094707 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 00:06:19.094729 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 15 00:06:19.094746 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 15 00:06:19.094763 kernel: Booting paravirtualized kernel on KVM May 15 00:06:19.094780 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 00:06:19.094796 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 15 00:06:19.094813 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 15 00:06:19.094830 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 15 00:06:19.094846 kernel: pcpu-alloc: [0] 0 1 May 15 00:06:19.094862 kernel: kvm-guest: PV spinlocks enabled May 15 00:06:19.094885 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 00:06:19.094908 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc May 15 00:06:19.094927 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:06:19.094945 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) May 15 00:06:19.094963 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:06:19.094980 kernel: Fallback order for Node 0: 0 May 15 00:06:19.094996 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 May 15 00:06:19.095011 kernel: Policy zone: Normal May 15 00:06:19.095032 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:06:19.095049 kernel: software IO TLB: area num 2. May 15 00:06:19.095067 kernel: Memory: 7511320K/7860552K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 348976K reserved, 0K cma-reserved) May 15 00:06:19.095086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 00:06:19.095103 kernel: Kernel/User page tables isolation: enabled May 15 00:06:19.095120 kernel: ftrace: allocating 37918 entries in 149 pages May 15 00:06:19.095153 kernel: ftrace: allocated 149 pages with 4 groups May 15 00:06:19.095171 kernel: Dynamic Preempt: voluntary May 15 00:06:19.095218 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 00:06:19.095240 kernel: rcu: RCU event tracing is enabled. May 15 00:06:19.095259 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 00:06:19.095277 kernel: Trampoline variant of Tasks RCU enabled. May 15 00:06:19.095300 kernel: Rude variant of Tasks RCU enabled. May 15 00:06:19.095319 kernel: Tracing variant of Tasks RCU enabled. May 15 00:06:19.095337 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 00:06:19.095356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 00:06:19.095375 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 15 00:06:19.095398 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 00:06:19.095418 kernel: Console: colour dummy device 80x25 May 15 00:06:19.095437 kernel: printk: console [ttyS0] enabled May 15 00:06:19.095455 kernel: ACPI: Core revision 20230628 May 15 00:06:19.095472 kernel: APIC: Switch to symmetric I/O mode setup May 15 00:06:19.095502 kernel: x2apic enabled May 15 00:06:19.095521 kernel: APIC: Switched APIC routing to: physical x2apic May 15 00:06:19.095540 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 May 15 00:06:19.095559 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns May 15 00:06:19.095583 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) May 15 00:06:19.095602 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 May 15 00:06:19.095620 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 May 15 00:06:19.095639 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 00:06:19.095658 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 15 00:06:19.095674 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 15 00:06:19.095691 kernel: Spectre V2 : Mitigation: IBRS May 15 00:06:19.095709 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 00:06:19.095727 kernel: RETBleed: Mitigation: IBRS May 15 00:06:19.095748 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 00:06:19.095766 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl May 15 00:06:19.095785 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 15 00:06:19.095803 kernel: MDS: Mitigation: Clear CPU buffers May 15 00:06:19.095821 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 15 00:06:19.095839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 00:06:19.095857 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 00:06:19.095875 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 00:06:19.095897 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 00:06:19.095915 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 15 00:06:19.095934 kernel: Freeing SMP alternatives memory: 32K May 15 00:06:19.095952 kernel: pid_max: default: 32768 minimum: 301 May 15 00:06:19.095971 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 00:06:19.095991 kernel: landlock: Up and running. May 15 00:06:19.096010 kernel: SELinux: Initializing. May 15 00:06:19.096029 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 15 00:06:19.096048 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 15 00:06:19.096070 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) May 15 00:06:19.096089 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 00:06:19.096108 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 00:06:19.096127 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 00:06:19.097203 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. May 15 00:06:19.097227 kernel: signal: max sigframe size: 1776 May 15 00:06:19.097245 kernel: rcu: Hierarchical SRCU implementation. May 15 00:06:19.097265 kernel: rcu: Max phase no-delay instances is 400. May 15 00:06:19.097284 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 15 00:06:19.097310 kernel: smp: Bringing up secondary CPUs ... May 15 00:06:19.097328 kernel: smpboot: x86: Booting SMP configuration: May 15 00:06:19.097346 kernel: .... node #0, CPUs: #1 May 15 00:06:19.097365 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
May 15 00:06:19.097384 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 15 00:06:19.097402 kernel: smp: Brought up 1 node, 2 CPUs May 15 00:06:19.097420 kernel: smpboot: Max logical packages: 1 May 15 00:06:19.097438 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) May 15 00:06:19.097455 kernel: devtmpfs: initialized May 15 00:06:19.097477 kernel: x86/mm: Memory block size: 128MB May 15 00:06:19.097502 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) May 15 00:06:19.097521 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:06:19.097540 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 00:06:19.097558 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:06:19.097575 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:06:19.097593 kernel: audit: initializing netlink subsys (disabled) May 15 00:06:19.097611 kernel: audit: type=2000 audit(1747267577.988:1): state=initialized audit_enabled=0 res=1 May 15 00:06:19.097629 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:06:19.097651 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 00:06:19.097668 kernel: cpuidle: using governor menu May 15 00:06:19.097686 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:06:19.097704 kernel: dca service started, version 1.12.1 May 15 00:06:19.097722 kernel: PCI: Using configuration type 1 for base access May 15 00:06:19.097740 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 00:06:19.097758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:06:19.097775 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 15 00:06:19.097797 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:06:19.097815 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 00:06:19.097833 kernel: ACPI: Added _OSI(Module Device) May 15 00:06:19.097851 kernel: ACPI: Added _OSI(Processor Device) May 15 00:06:19.097869 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:06:19.097888 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:06:19.097906 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 15 00:06:19.097924 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 15 00:06:19.097941 kernel: ACPI: Interpreter enabled May 15 00:06:19.097959 kernel: ACPI: PM: (supports S0 S3 S5) May 15 00:06:19.097981 kernel: ACPI: Using IOAPIC for interrupt routing May 15 00:06:19.097999 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 00:06:19.098017 kernel: PCI: Ignoring E820 reservations for host bridge windows May 15 00:06:19.098036 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F May 15 00:06:19.098054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:06:19.098341 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 15 00:06:19.098548 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 15 00:06:19.098738 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 15 00:06:19.098762 kernel: PCI host bridge to bus 0000:00 May 15 00:06:19.098940 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 00:06:19.099108 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 00:06:19.102375 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 00:06:19.102567 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] May 15 00:06:19.102735 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:06:19.102965 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 15 00:06:19.103181 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 May 15 00:06:19.103376 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 15 00:06:19.103568 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 15 00:06:19.103759 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 May 15 00:06:19.103942 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 15 00:06:19.104132 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] May 15 00:06:19.106440 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 15 00:06:19.106662 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] May 15 00:06:19.106866 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] May 15 00:06:19.107087 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:06:19.107316 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] May 15 00:06:19.107537 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] May 15 00:06:19.107571 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 00:06:19.107592 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 00:06:19.107613 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 00:06:19.107633 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 00:06:19.107652 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 15 00:06:19.107672 kernel: iommu: Default domain type: Translated May 15 00:06:19.107690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 00:06:19.107707 kernel: efivars: Registered efivars operations May 15 00:06:19.107723 kernel: PCI: Using ACPI for IRQ routing May 15 00:06:19.107747 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 00:06:19.107765 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] May 15 00:06:19.107782 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] May 15 00:06:19.107800 kernel: e820: reserve RAM buffer [mem 0xbd327000-0xbfffffff] May 15 00:06:19.107816 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] May 15 00:06:19.107831 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] May 15 00:06:19.107848 kernel: vgaarb: loaded May 15 00:06:19.107865 kernel: clocksource: Switched to clocksource kvm-clock May 15 00:06:19.107884 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:06:19.107909 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:06:19.107928 kernel: pnp: PnP ACPI init May 15 00:06:19.107947 kernel: pnp: PnP ACPI: found 7 devices May 15 00:06:19.107966 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 00:06:19.107985 kernel: NET: Registered PF_INET protocol family May 15 00:06:19.108004 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 15 00:06:19.108024 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) May 15 00:06:19.108042 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:06:19.108058 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 00:06:19.108080 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 15 00:06:19.108097 kernel: TCP: Hash tables configured (established 65536 bind 65536) May 15 00:06:19.108115 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) May 15 00:06:19.108133 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) May 15 00:06:19.110196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:06:19.110219 kernel: NET: Registered PF_XDP protocol family May 15 00:06:19.110428 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 00:06:19.110616 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 00:06:19.110793 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 00:06:19.110968 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] May 15 00:06:19.111192 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 15 00:06:19.111218 kernel: PCI: CLS 0 bytes, default 64 May 15 00:06:19.111238 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 15 00:06:19.111257 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) May 15 00:06:19.111277 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 15 00:06:19.111303 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns May 15 00:06:19.111322 kernel: clocksource: Switched to clocksource tsc May 
15 00:06:19.111342 kernel: Initialise system trusted keyrings May 15 00:06:19.111361 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 May 15 00:06:19.111379 kernel: Key type asymmetric registered May 15 00:06:19.111396 kernel: Asymmetric key parser 'x509' registered May 15 00:06:19.111414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 15 00:06:19.111431 kernel: io scheduler mq-deadline registered May 15 00:06:19.111459 kernel: io scheduler kyber registered May 15 00:06:19.111477 kernel: io scheduler bfq registered May 15 00:06:19.111499 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 00:06:19.111517 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 15 00:06:19.111729 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver May 15 00:06:19.111755 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 May 15 00:06:19.111941 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver May 15 00:06:19.111964 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 15 00:06:19.115437 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver May 15 00:06:19.115472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:06:19.115498 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 00:06:19.115517 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 15 00:06:19.115536 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A May 15 00:06:19.115554 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A May 15 00:06:19.115777 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) May 15 00:06:19.115803 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 00:06:19.115821 kernel: i8042: Warning: Keylock active May 15 00:06:19.115840 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 00:06:19.115864 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 00:06:19.116049 kernel: rtc_cmos 00:00: RTC can wake from S4 May 15 00:06:19.116242 kernel: rtc_cmos 00:00: registered as rtc0 May 15 00:06:19.116420 kernel: rtc_cmos 00:00: setting system clock to 2025-05-15T00:06:18 UTC (1747267578) May 15 00:06:19.116589 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 15 00:06:19.116611 kernel: intel_pstate: CPU model not supported May 15 00:06:19.116630 kernel: pstore: Using crash dump compression: deflate May 15 00:06:19.116648 kernel: pstore: Registered efi_pstore as persistent store backend May 15 00:06:19.116672 kernel: NET: Registered PF_INET6 protocol family May 15 00:06:19.116690 kernel: Segment Routing with IPv6 May 15 00:06:19.116716 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:06:19.116735 kernel: NET: Registered PF_PACKET protocol family May 15 00:06:19.116753 kernel: Key type dns_resolver registered May 15 00:06:19.116771 kernel: IPI shorthand broadcast: enabled May 15 00:06:19.116790 kernel: sched_clock: Marking stable (873004115, 152740630)->(1054048050, -28303305) May 15 00:06:19.116808 kernel: registered taskstats version 1 May 15 00:06:19.116827 kernel: Loading compiled-in X.509 certificates May 15 00:06:19.116849 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: e21d6dc0691a7e1e8bef90d9217bc8c09d6860f3' May 15 00:06:19.116867 kernel: Key type .fscrypt registered May 15 00:06:19.116885 kernel: Key type fscrypt-provisioning registered May 15 00:06:19.116903 kernel: ima: Allocated hash algorithm: 
sha1 May 15 00:06:19.116921 kernel: ima: No architecture policies found May 15 00:06:19.116939 kernel: clk: Disabling unused clocks May 15 00:06:19.116957 kernel: Freeing unused kernel image (initmem) memory: 43484K May 15 00:06:19.116976 kernel: Write protecting the kernel read-only data: 38912k May 15 00:06:19.116995 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 15 00:06:19.117017 kernel: Run /init as init process May 15 00:06:19.117035 kernel: with arguments: May 15 00:06:19.117053 kernel: /init May 15 00:06:19.117071 kernel: with environment: May 15 00:06:19.117088 kernel: HOME=/ May 15 00:06:19.117106 kernel: TERM=linux May 15 00:06:19.117124 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:06:19.117157 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 00:06:19.117178 systemd[1]: Successfully made /usr/ read-only. May 15 00:06:19.117206 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:06:19.117226 systemd[1]: Detected virtualization google. May 15 00:06:19.117245 systemd[1]: Detected architecture x86-64. May 15 00:06:19.117263 systemd[1]: Running in initrd. May 15 00:06:19.117282 systemd[1]: No hostname configured, using default hostname. May 15 00:06:19.117302 systemd[1]: Hostname set to . May 15 00:06:19.117320 systemd[1]: Initializing machine ID from random generator. May 15 00:06:19.117343 systemd[1]: Queued start job for default target initrd.target. May 15 00:06:19.117362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:06:19.117382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:06:19.117402 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 00:06:19.117421 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:06:19.117441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 00:06:19.117462 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 00:06:19.117487 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 00:06:19.117526 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 00:06:19.117550 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:06:19.117570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:06:19.117590 systemd[1]: Reached target paths.target - Path Units. May 15 00:06:19.117614 systemd[1]: Reached target slices.target - Slice Units. May 15 00:06:19.117634 systemd[1]: Reached target swap.target - Swaps. May 15 00:06:19.117654 systemd[1]: Reached target timers.target - Timer Units. May 15 00:06:19.117672 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:06:19.117690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
May 15 00:06:19.117715 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 00:06:19.117735 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 00:06:19.117755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:06:19.117775 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:06:19.117798 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:06:19.117819 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:06:19.117838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 00:06:19.117858 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:06:19.117878 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 00:06:19.117898 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:06:19.117918 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:06:19.117938 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:06:19.117994 systemd-journald[184]: Collecting audit messages is disabled. May 15 00:06:19.118040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:06:19.118060 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 00:06:19.118080 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:06:19.118105 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:06:19.118125 systemd-journald[184]: Journal started May 15 00:06:19.118181 systemd-journald[184]: Runtime Journal (/run/log/journal/3d24577745bc4b588002d103247c66cb) is 8M, max 148.6M, 140.6M free. May 15 00:06:19.121792 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:06:19.123914 systemd-modules-load[186]: Inserted module 'overlay' May 15 00:06:19.128277 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:06:19.132977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:06:19.141697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:06:19.158381 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:06:19.164245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 00:06:19.163355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:06:19.173944 kernel: Bridge firewalling registered May 15 00:06:19.173182 systemd-modules-load[186]: Inserted module 'br_netfilter' May 15 00:06:19.176767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:06:19.178081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:06:19.192830 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:06:19.193721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:06:19.202314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:06:19.217362 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 15 00:06:19.218481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:06:19.230699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:06:19.233354 dracut-cmdline[217]: dracut-dracut-053 May 15 00:06:19.238254 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc May 15 00:06:19.242859 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:06:19.303275 systemd-resolved[229]: Positive Trust Anchors: May 15 00:06:19.303294 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:06:19.303367 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:06:19.309134 systemd-resolved[229]: Defaulting to hostname 'linux'. May 15 00:06:19.312229 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:06:19.319721 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:06:19.354181 kernel: SCSI subsystem initialized May 15 00:06:19.365192 kernel: Loading iSCSI transport class v2.0-870. May 15 00:06:19.377190 kernel: iscsi: registered transport (tcp) May 15 00:06:19.401077 kernel: iscsi: registered transport (qla4xxx) May 15 00:06:19.401184 kernel: QLogic iSCSI HBA Driver May 15 00:06:19.453992 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 00:06:19.458356 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 00:06:19.504188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 00:06:19.504289 kernel: device-mapper: uevent: version 1.0.3 May 15 00:06:19.504318 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 00:06:19.551240 kernel: raid6: avx2x4 gen() 17641 MB/s May 15 00:06:19.568240 kernel: raid6: avx2x2 gen() 17670 MB/s May 15 00:06:19.585719 kernel: raid6: avx2x1 gen() 13781 MB/s May 15 00:06:19.585775 kernel: raid6: using algorithm avx2x2 gen() 17670 MB/s May 15 00:06:19.603946 kernel: raid6: .... xor() 18589 MB/s, rmw enabled May 15 00:06:19.604034 kernel: raid6: using avx2x2 recovery algorithm May 15 00:06:19.627199 kernel: xor: automatically using best checksumming function avx May 15 00:06:19.794191 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 00:06:19.807263 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 00:06:19.816394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 15 00:06:19.847999 systemd-udevd[403]: Using default interface naming scheme 'v255'. May 15 00:06:19.856214 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:06:19.869464 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 00:06:19.899307 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation May 15 00:06:19.937616 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:06:19.949407 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:06:20.047046 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:06:20.064550 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:06:20.121390 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:06:20.124106 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:06:20.156306 kernel: cryptd: max_cpu_qlen set to 1000 May 15 00:06:20.168944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:06:20.214325 kernel: AVX2 version of gcm_enc/dec engaged. May 15 00:06:20.214368 kernel: AES CTR mode by8 optimization enabled May 15 00:06:20.190282 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:06:20.206972 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:06:20.252576 kernel: scsi host0: Virtio SCSI HBA May 15 00:06:20.295698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:06:20.356304 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 May 15 00:06:20.295902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:06:20.395389 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) May 15 00:06:20.395673 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks May 15 00:06:20.395852 kernel: sd 0:0:1:0: [sda] Write Protect is off May 15 00:06:20.396021 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 May 15 00:06:20.396224 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 15 00:06:20.323053 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:06:20.452314 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:06:20.452358 kernel: GPT:17805311 != 25165823 May 15 00:06:20.452381 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:06:20.452413 kernel: GPT:17805311 != 25165823 May 15 00:06:20.452437 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:06:20.452464 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 00:06:20.452488 kernel: sd 0:0:1:0: [sda] Attached SCSI disk May 15 00:06:20.334271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:06:20.334636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:06:20.346658 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:06:20.443580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:06:20.465462 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
May 15 00:06:20.523305 kernel: BTRFS: device fsid 11358d57-dfa4-4197-9524-595753ed5512 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454) May 15 00:06:20.535189 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455) May 15 00:06:20.541359 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:06:20.575006 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. May 15 00:06:20.598728 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. May 15 00:06:20.609118 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. May 15 00:06:20.630326 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. May 15 00:06:20.663775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. May 15 00:06:20.683353 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 00:06:20.702160 disk-uuid[543]: Primary Header is updated. May 15 00:06:20.702160 disk-uuid[543]: Secondary Entries is updated. May 15 00:06:20.702160 disk-uuid[543]: Secondary Header is updated. May 15 00:06:20.740935 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 00:06:20.721343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:06:20.775530 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:06:21.752266 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 15 00:06:21.752353 disk-uuid[544]: The operation has completed successfully. May 15 00:06:21.830759 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:06:21.830908 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:06:21.898394 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:06:21.926433 sh[567]: Success May 15 00:06:21.950177 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 15 00:06:22.045035 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:06:22.051716 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:06:22.077745 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:06:22.120640 kernel: BTRFS info (device dm-0): first mount of filesystem 11358d57-dfa4-4197-9524-595753ed5512 May 15 00:06:22.120751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 00:06:22.120778 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:06:22.130263 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:06:22.142905 kernel: BTRFS info (device dm-0): using free space tree May 15 00:06:22.174180 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 15 00:06:22.179028 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 00:06:22.188166 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:06:22.194393 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 15 00:06:22.265505 kernel: BTRFS info (device sda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:06:22.265551 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:06:22.265575 kernel: BTRFS info (device sda6): using free space tree May 15 00:06:22.265599 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 00:06:22.265621 kernel: BTRFS info (device sda6): auto enabling async discard May 15 00:06:22.244498 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 00:06:22.288345 kernel: BTRFS info (device sda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:06:22.300067 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:06:22.323431 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 00:06:22.371486 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:06:22.396478 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:06:22.494226 systemd-networkd[748]: lo: Link UP May 15 00:06:22.494643 systemd-networkd[748]: lo: Gained carrier May 15 00:06:22.498208 systemd-networkd[748]: Enumeration completed May 15 00:06:22.498442 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:06:22.499315 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:06:22.533545 ignition[697]: Ignition 2.20.0 May 15 00:06:22.499323 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:06:22.533555 ignition[697]: Stage: fetch-offline May 15 00:06:22.501568 systemd-networkd[748]: eth0: Link UP May 15 00:06:22.533598 ignition[697]: no configs at "/usr/lib/ignition/base.d" May 15 00:06:22.501576 systemd-networkd[748]: eth0: Gained carrier May 15 00:06:22.533608 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 15 00:06:22.501592 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:06:22.533722 ignition[697]: parsed url from cmdline: "" May 15 00:06:22.517272 systemd-networkd[748]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826.c.flatcar-212911.internal' to 'ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826' May 15 00:06:22.533729 ignition[697]: no config URL provided May 15 00:06:22.517295 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.108/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 15 00:06:22.533738 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:06:22.536680 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:06:22.533752 ignition[697]: no config at "/usr/lib/ignition/user.ign" May 15 00:06:22.567028 systemd[1]: Reached target network.target - Network. May 15 00:06:22.533761 ignition[697]: failed to fetch config: resource requires networking May 15 00:06:22.587535 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 15 00:06:22.534268 ignition[697]: Ignition finished successfully May 15 00:06:22.624396 unknown[757]: fetched base config from "system" May 15 00:06:22.612764 ignition[757]: Ignition 2.20.0 May 15 00:06:22.624410 unknown[757]: fetched base config from "system" May 15 00:06:22.612776 ignition[757]: Stage: fetch May 15 00:06:22.624421 unknown[757]: fetched user config from "gcp" May 15 00:06:22.613015 ignition[757]: no configs at "/usr/lib/ignition/base.d" May 15 00:06:22.626815 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 15 00:06:22.613027 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 15 00:06:22.646365 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 00:06:22.613174 ignition[757]: parsed url from cmdline: "" May 15 00:06:22.700750 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 00:06:22.613180 ignition[757]: no config URL provided May 15 00:06:22.725362 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:06:22.613186 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:06:22.759788 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 00:06:22.613198 ignition[757]: no config at "/usr/lib/ignition/user.ign" May 15 00:06:22.768185 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:06:22.613228 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 May 15 00:06:22.795456 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:06:22.617436 ignition[757]: GET result: OK May 15 00:06:22.805572 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:06:22.617538 ignition[757]: parsing config with SHA512: 7b9400db45fa811d0017e01f4f099bfaf98c8731197f3b2ba3df904b6de436214728e1c419d7cccc237205159379d5b49efbcf2265d80c43fc37bb25c1e759ee May 15 00:06:22.833489 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:06:22.624982 ignition[757]: fetch: fetch complete May 15 00:06:22.851492 systemd[1]: Reached target basic.target - Basic System. May 15 00:06:22.624989 ignition[757]: fetch: fetch passed May 15 00:06:22.878411 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 00:06:22.625042 ignition[757]: Ignition finished successfully May 15 00:06:22.670021 ignition[764]: Ignition 2.20.0 May 15 00:06:22.670031 ignition[764]: Stage: kargs May 15 00:06:22.670273 ignition[764]: no configs at "/usr/lib/ignition/base.d" May 15 00:06:22.670286 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 15 00:06:22.671375 ignition[764]: kargs: kargs passed May 15 00:06:22.671429 ignition[764]: Ignition finished successfully May 15 00:06:22.748548 ignition[769]: Ignition 2.20.0 May 15 00:06:22.748562 ignition[769]: Stage: disks May 15 00:06:22.748752 ignition[769]: no configs at "/usr/lib/ignition/base.d" May 15 00:06:22.748764 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 15 00:06:22.749763 ignition[769]: disks: disks passed May 15 00:06:22.749817 ignition[769]: Ignition finished successfully May 15 00:06:22.949096 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 15 00:06:23.125184 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:06:23.130292 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 15 00:06:23.282183 kernel: EXT4-fs (sda9): mounted filesystem 36fdaeac-383d-468b-a0a4-9f47e3957a15 r/w with ordered data mode. Quota mode: none. May 15 00:06:23.282990 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:06:23.283844 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:06:23.315287 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:06:23.326297 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:06:23.350715 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 00:06:23.400335 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (786) May 15 00:06:23.400375 kernel: BTRFS info (device sda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:06:23.400392 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:06:23.400407 kernel: BTRFS info (device sda6): using free space tree May 15 00:06:23.350818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:06:23.442344 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 00:06:23.442390 kernel: BTRFS info (device sda6): auto enabling async discard May 15 00:06:23.350989 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:06:23.426600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:06:23.450581 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:06:23.466380 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:06:23.610096 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:06:23.620331 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory May 15 00:06:23.630285 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:06:23.640289 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:06:23.786512 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:06:23.792413 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:06:23.821431 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 00:06:23.845614 kernel: BTRFS info (device sda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:06:23.846595 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:06:23.889295 ignition[902]: INFO : Ignition 2.20.0 May 15 00:06:23.889295 ignition[902]: INFO : Stage: mount May 15 00:06:23.889295 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:06:23.889295 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 15 00:06:23.946339 ignition[902]: INFO : mount: mount passed May 15 00:06:23.946339 ignition[902]: INFO : Ignition finished successfully May 15 00:06:23.892594 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:06:23.898878 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 15 00:06:24.021344 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (914) May 15 00:06:24.021391 kernel: BTRFS info (device sda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1 May 15 00:06:24.021431 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:06:24.021455 kernel: BTRFS info (device sda6): using free space tree May 15 00:06:24.021479 kernel: BTRFS info (device sda6): enabling ssd optimizations May 15 00:06:23.916341 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:06:24.044480 kernel: BTRFS info (device sda6): auto enabling async discard May 15 00:06:23.966437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:06:24.036752 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 00:06:24.077263 ignition[930]: INFO : Ignition 2.20.0 May 15 00:06:24.077263 ignition[930]: INFO : Stage: files May 15 00:06:24.092294 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:06:24.092294 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 15 00:06:24.092294 ignition[930]: DEBUG : files: compiled without relabeling support, skipping May 15 00:06:24.092294 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:06:24.092294 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:06:24.092294 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:06:24.092294 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:06:24.092294 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:06:24.090987 unknown[930]: wrote ssh authorized keys file for user: core May 15 00:06:24.193338 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 00:06:24.193338 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 00:06:24.309376 systemd-networkd[748]: eth0: Gained IPv6LL May 15 00:06:25.309669 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:06:25.706015 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 00:06:25.706015 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:06:25.738321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 15 00:06:26.006111 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 00:06:26.154255 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:06:26.169297 ignition[930]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:06:26.169297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 15 00:06:26.437110 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 00:06:26.786029 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:06:26.786029 ignition[930]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 00:06:26.823418 ignition[930]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:06:26.823418 ignition[930]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:06:26.823418 ignition[930]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 00:06:26.823418 ignition[930]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 15 00:06:26.823418 ignition[930]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:06:26.823418 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:06:26.823418 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:06:26.823418 ignition[930]: INFO : files: files passed May 15 00:06:26.823418 ignition[930]: INFO : Ignition finished successfully May 15 
00:06:26.790192 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:06:26.809417 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:06:26.838381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 00:06:26.889910 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:06:27.036339 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:06:27.036339 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:06:26.890035 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 00:06:27.074331 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:06:26.961740 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:06:26.977614 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:06:27.001412 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:06:27.077664 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:06:27.077784 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:06:27.101280 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:06:27.121471 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:06:27.141554 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:06:27.149383 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 00:06:27.194196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:06:27.213395 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:06:27.250761 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:06:27.262612 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:06:27.273773 systemd[1]: Stopped target timers.target - Timer Units. May 15 00:06:27.302642 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:06:27.302885 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:06:27.332719 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:06:27.341716 systemd[1]: Stopped target basic.target - Basic System. May 15 00:06:27.358707 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 00:06:27.376703 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:06:27.394707 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:06:27.411722 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:06:27.429723 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:06:27.447690 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:06:27.466723 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:06:27.494667 systemd[1]: Stopped target swap.target - Swaps. May 15 00:06:27.506646 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
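The Ignition files stage logged above writes SSH keys for the core user, fetches the helm, cilium, and kubernetes sysext archives, links /etc/extensions/kubernetes.raw to the downloaded image, and enables prepare-helm.service. The config that drove it is not part of this log; the Python sketch below only illustrates the general shape of a spec-3.x Ignition config that could produce those operations. The SSH key, the file list, and the unit body are placeholders, and the real config for this node may differ.

    import json

    # Hypothetical reconstruction, not the config actually used on this node.
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [{
                "name": "core",
                # Placeholder key; the log only records that keys were written.
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder core@example"],
            }]
        },
        "storage": {
            "files": [
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,
                # Unit body elided; the log only shows the unit being installed and enabled.
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
            }]
        },
    }

    print(json.dumps(config, indent=2))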
May 15 00:06:27.506871 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:06:27.538660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 00:06:27.555618 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:06:27.563587 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:06:27.563762 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:06:27.581618 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:06:27.581825 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:06:27.619662 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:06:27.619898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:06:27.706318 ignition[983]: INFO : Ignition 2.20.0 May 15 00:06:27.706318 ignition[983]: INFO : Stage: umount May 15 00:06:27.706318 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:06:27.706318 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 15 00:06:27.706318 ignition[983]: INFO : umount: umount passed May 15 00:06:27.706318 ignition[983]: INFO : Ignition finished successfully May 15 00:06:27.628683 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:06:27.628864 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:06:27.657391 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:06:27.702419 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:06:27.730509 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:06:27.730750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:06:27.747690 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:06:27.747873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:06:27.785353 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:06:27.786728 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:06:27.786872 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:06:27.802100 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:06:27.802258 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:06:27.823901 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:06:27.824024 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:06:27.843895 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:06:27.843962 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:06:27.849533 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:06:27.849618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:06:27.866540 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 00:06:27.866615 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 00:06:27.884594 systemd[1]: Stopped target network.target - Network. May 15 00:06:27.901507 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:06:27.901607 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:06:27.918571 systemd[1]: Stopped target paths.target - Path Units. 
May 15 00:06:27.935500 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:06:27.939253 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:06:27.950500 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:06:27.977323 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:06:27.985566 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:06:27.985629 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:06:28.011397 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:06:28.011475 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:06:28.029381 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:06:28.029499 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:06:28.050407 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:06:28.050496 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:06:28.068396 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:06:28.068493 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 00:06:28.086569 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:06:28.104528 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:06:28.122895 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:06:28.123032 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:06:28.133887 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 00:06:28.134177 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:06:28.134317 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:06:28.158833 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 00:06:28.160261 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:06:28.160353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:06:28.182305 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:06:28.186439 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:06:28.186523 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:06:28.213558 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:06:28.213638 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:06:28.232679 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:06:28.707304 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). May 15 00:06:28.232749 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:06:28.263514 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:06:28.263612 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:06:28.282642 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:06:28.303864 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 00:06:28.303961 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
May 15 00:06:28.304473 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:06:28.304643 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:06:28.317408 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:06:28.317478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:06:28.347411 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:06:28.347474 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:06:28.367361 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:06:28.367476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:06:28.393314 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:06:28.393438 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:06:28.423285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:06:28.423415 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:06:28.460363 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:06:28.484296 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:06:28.484431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:06:28.504610 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 00:06:28.504683 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:06:28.525401 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:06:28.525494 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:06:28.547404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:06:28.547520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:06:28.567701 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 00:06:28.567804 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 00:06:28.568400 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:06:28.568529 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 00:06:28.587717 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:06:28.587840 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:06:28.610646 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:06:28.624538 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:06:28.651529 systemd[1]: Switching root. 
May 15 00:06:29.080329 systemd-journald[184]: Journal stopped May 15 00:06:31.728689 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:06:31.728740 kernel: SELinux: policy capability open_perms=1 May 15 00:06:31.728762 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:06:31.728784 kernel: SELinux: policy capability always_check_network=0 May 15 00:06:31.728801 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:06:31.728819 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:06:31.728840 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:06:31.728858 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:06:31.728880 kernel: audit: type=1403 audit(1747267589.368:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:06:31.728901 systemd[1]: Successfully loaded SELinux policy in 92.201ms. May 15 00:06:31.728924 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.631ms. May 15 00:06:31.728947 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 00:06:31.728970 systemd[1]: Detected virtualization google. May 15 00:06:31.728989 systemd[1]: Detected architecture x86-64. May 15 00:06:31.729014 systemd[1]: Detected first boot. May 15 00:06:31.729036 systemd[1]: Initializing machine ID from random generator. May 15 00:06:31.729057 zram_generator::config[1026]: No configuration found. May 15 00:06:31.729078 kernel: Guest personality initialized and is inactive May 15 00:06:31.729097 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 00:06:31.729120 kernel: Initialized host personality May 15 00:06:31.729150 kernel: NET: Registered PF_VSOCK protocol family May 15 00:06:31.729176 systemd[1]: Populated /etc with preset unit settings. May 15 00:06:31.729198 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 00:06:31.729218 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:06:31.729238 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:06:31.729258 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:06:31.729279 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:06:31.729299 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:06:31.729325 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:06:31.729346 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:06:31.729367 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:06:31.729389 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 00:06:31.729410 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:06:31.729432 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:06:31.729453 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 15 00:06:31.729478 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:06:31.729500 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 00:06:31.729521 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:06:31.729542 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:06:31.729564 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:06:31.729590 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 00:06:31.729612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:06:31.729634 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:06:31.729659 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:06:31.729681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:06:31.729702 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:06:31.729724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:06:31.729745 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:06:31.729767 systemd[1]: Reached target slices.target - Slice Units. May 15 00:06:31.729788 systemd[1]: Reached target swap.target - Swaps. May 15 00:06:31.729810 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:06:31.729835 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:06:31.729856 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 00:06:31.729877 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:06:31.729901 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:06:31.729925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:06:31.729944 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:06:31.729966 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 00:06:31.729988 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:06:31.730009 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:06:31.730030 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:06:31.730052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:06:31.730074 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 00:06:31.730101 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:06:31.730125 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:06:31.730163 systemd[1]: Reached target machines.target - Containers. May 15 00:06:31.730193 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 00:06:31.730215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 15 00:06:31.730237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:06:31.730260 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:06:31.730283 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:06:31.730308 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:06:31.730337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:06:31.730361 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 00:06:31.730384 kernel: fuse: init (API version 7.39) May 15 00:06:31.730408 kernel: ACPI: bus type drm_connector registered May 15 00:06:31.730429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:06:31.730454 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:06:31.730478 kernel: loop: module loaded May 15 00:06:31.730504 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:06:31.730527 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:06:31.730551 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:06:31.730574 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:06:31.730599 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:06:31.730623 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:06:31.730647 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:06:31.730668 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:06:31.730731 systemd-journald[1114]: Collecting audit messages is disabled. May 15 00:06:31.730774 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 00:06:31.730794 systemd-journald[1114]: Journal started May 15 00:06:31.730957 systemd-journald[1114]: Runtime Journal (/run/log/journal/a8624d9165cc4ab08193d146d2ee34ee) is 8M, max 148.6M, 140.6M free. May 15 00:06:30.433750 systemd[1]: Queued start job for default target multi-user.target. May 15 00:06:30.444979 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 15 00:06:30.445643 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:06:31.769196 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 00:06:31.795221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:06:31.817562 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:06:31.817670 systemd[1]: Stopped verity-setup.service. May 15 00:06:31.843269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:06:31.855198 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:06:31.865796 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 00:06:31.875503 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
May 15 00:06:31.885574 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:06:31.895537 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:06:31.905524 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:06:31.915484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 00:06:31.925693 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:06:31.937751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:06:31.949676 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:06:31.949969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:06:31.961721 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:06:31.962033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:06:31.973751 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:06:31.974049 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:06:31.984710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:06:31.985007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:06:31.996759 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:06:31.997059 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:06:32.007774 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:06:32.008065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:06:32.018788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:06:32.029697 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:06:32.041772 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:06:32.053779 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 00:06:32.065785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:06:32.090327 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:06:32.105304 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:06:32.123318 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:06:32.133318 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:06:32.133566 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:06:32.144605 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 00:06:32.162430 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 00:06:32.164039 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:06:32.183537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:06:32.192580 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:06:32.210905 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
May 15 00:06:32.226354 systemd-journald[1114]: Time spent on flushing to /var/log/journal/a8624d9165cc4ab08193d146d2ee34ee is 115.243ms for 945 entries. May 15 00:06:32.226354 systemd-journald[1114]: System Journal (/var/log/journal/a8624d9165cc4ab08193d146d2ee34ee) is 8M, max 584.8M, 576.8M free. May 15 00:06:32.372129 systemd-journald[1114]: Received client request to flush runtime journal. May 15 00:06:32.372237 kernel: loop0: detected capacity change from 0 to 147912 May 15 00:06:32.222361 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:06:32.235507 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 00:06:32.246368 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:06:32.259431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:06:32.287395 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:06:32.306396 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:06:32.331405 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:06:32.360415 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 00:06:32.371564 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 00:06:32.384041 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:06:32.395964 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 00:06:32.407855 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:06:32.418264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:06:32.424896 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. May 15 00:06:32.424931 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. May 15 00:06:32.445497 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:06:32.459948 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:06:32.478905 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:06:32.486355 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 00:06:32.509292 kernel: loop1: detected capacity change from 0 to 210664 May 15 00:06:32.509774 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:06:32.518882 udevadm[1153]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 00:06:32.540897 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:06:32.544153 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 00:06:32.598912 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:06:32.620486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:06:32.672379 kernel: loop2: detected capacity change from 0 to 138176 May 15 00:06:32.695604 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. 
May 15 00:06:32.696694 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. May 15 00:06:32.710874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:06:32.791193 kernel: loop3: detected capacity change from 0 to 52152 May 15 00:06:32.961601 kernel: loop4: detected capacity change from 0 to 147912 May 15 00:06:33.035748 kernel: loop5: detected capacity change from 0 to 210664 May 15 00:06:33.092183 kernel: loop6: detected capacity change from 0 to 138176 May 15 00:06:33.155305 kernel: loop7: detected capacity change from 0 to 52152 May 15 00:06:33.178029 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. May 15 00:06:33.179104 (sd-merge)[1177]: Merged extensions into '/usr'. May 15 00:06:33.188788 systemd[1]: Reload requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... May 15 00:06:33.189431 systemd[1]: Reloading... May 15 00:06:33.347207 zram_generator::config[1201]: No configuration found. May 15 00:06:33.419787 ldconfig[1145]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:06:33.580376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:06:33.687125 systemd[1]: Reloading finished in 496 ms. May 15 00:06:33.708658 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 00:06:33.718741 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:06:33.730778 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:06:33.758807 systemd[1]: Starting ensure-sysext.service... May 15 00:06:33.768268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:06:33.790458 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:06:33.815812 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:06:33.816450 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:06:33.817703 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:06:33.818045 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 15 00:06:33.818123 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 15 00:06:33.822463 systemd[1]: Reload requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... May 15 00:06:33.822488 systemd[1]: Reloading... May 15 00:06:33.833650 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:06:33.835506 systemd-tmpfiles[1247]: Skipping /boot May 15 00:06:33.881051 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:06:33.881079 systemd-tmpfiles[1247]: Skipping /boot May 15 00:06:33.882461 systemd-udevd[1248]: Using default interface naming scheme 'v255'. May 15 00:06:33.960180 zram_generator::config[1274]: No configuration found. 
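Just above, (sd-merge) reports merging the extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-gce' into /usr; the loop0-loop7 capacity changes are those extension images being attached. As a rough aid for inspecting a host like this one, the sketch below lists whatever extension images sit in the usual systemd-sysext search directories; the directory list is the standard search path, and which names appear depends on the host.

    from pathlib import Path

    # Standard systemd-sysext search directories (raw images or directory trees).
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions():
        found = []
        for d in SEARCH_PATHS:
            p = Path(d)
            if p.is_dir():
                found += sorted(entry.name for entry in p.iterdir())
        return found

    if __name__ == "__main__":
        for name in list_extensions():
            print(name)  # e.g. kubernetes.raw, as written by the files stage above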
May 15 00:06:34.227178 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 00:06:34.253213 kernel: ACPI: button: Power Button [PWRF] May 15 00:06:34.293183 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 15 00:06:34.314210 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 15 00:06:34.318117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:06:34.338183 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 15 00:06:34.423206 kernel: ACPI: button: Sleep Button [SLPF] May 15 00:06:34.467177 kernel: EDAC MC: Ver: 3.0.0 May 15 00:06:34.521179 kernel: mousedev: PS/2 mouse device common for all mice May 15 00:06:34.552350 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 00:06:34.552888 systemd[1]: Reloading finished in 729 ms. May 15 00:06:34.560189 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1299) May 15 00:06:34.570183 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:06:34.609504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:06:34.640747 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 00:06:34.664101 systemd[1]: Finished ensure-sysext.service. May 15 00:06:34.704135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. May 15 00:06:34.715531 systemd[1]: Reached target tpm2.target - Trusted Platform Module. May 15 00:06:34.725364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:06:34.731425 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:06:34.747576 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 00:06:34.759557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:06:34.769386 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 00:06:34.785553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:06:34.810227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:06:34.817510 lvm[1358]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:06:34.828394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:06:34.848502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:06:34.858845 augenrules[1378]: No rules May 15 00:06:34.864678 systemd[1]: Starting setup-oem.service - Setup OEM... May 15 00:06:34.873483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:06:34.878927 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 15 00:06:34.890304 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 00:06:34.897280 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:06:34.916982 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:06:34.934389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:06:34.944821 systemd[1]: Reached target time-set.target - System Time Set. May 15 00:06:34.961735 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:06:34.988435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:06:34.998303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:06:35.003656 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:06:35.004011 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:06:35.014892 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:06:35.015629 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 00:06:35.016044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:06:35.016539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:06:35.017268 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:06:35.017533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:06:35.018199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:06:35.018670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:06:35.019248 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:06:35.019640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:06:35.024035 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 00:06:35.024892 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:06:35.040074 systemd[1]: Finished setup-oem.service - Setup OEM. May 15 00:06:35.045533 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:06:35.051480 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 00:06:35.057347 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... May 15 00:06:35.057988 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:06:35.058095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:06:35.061009 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 00:06:35.069194 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:06:35.071485 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 15 00:06:35.071558 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:06:35.074003 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 00:06:35.126627 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:06:35.133409 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:06:35.148433 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. May 15 00:06:35.173910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:06:35.185562 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:06:35.291697 systemd-networkd[1387]: lo: Link UP May 15 00:06:35.291716 systemd-networkd[1387]: lo: Gained carrier May 15 00:06:35.294462 systemd-networkd[1387]: Enumeration completed May 15 00:06:35.294615 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:06:35.295311 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:06:35.295329 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:06:35.295943 systemd-networkd[1387]: eth0: Link UP May 15 00:06:35.295959 systemd-networkd[1387]: eth0: Gained carrier May 15 00:06:35.295985 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:06:35.301443 systemd-resolved[1388]: Positive Trust Anchors: May 15 00:06:35.301462 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:06:35.301525 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:06:35.308975 systemd-resolved[1388]: Defaulting to hostname 'linux'. May 15 00:06:35.310225 systemd-networkd[1387]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826.c.flatcar-212911.internal' to 'ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826' May 15 00:06:35.310250 systemd-networkd[1387]: eth0: DHCPv4 address 10.128.0.108/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 15 00:06:35.313417 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 00:06:35.330437 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 00:06:35.342495 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:06:35.353600 systemd[1]: Reached target network.target - Network. May 15 00:06:35.363307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
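One detail in the networkd output above: the DHCP-provided FQDN (80 characters) was shortened to its first label before being applied. A plausible reading is that the full name exceeds the kernel's 64-byte hostname limit while the first label fits; the helper below merely mimics the observed behaviour and is not systemd's actual logic.

    HOST_NAME_MAX = 64  # Linux limit for the node hostname

    def shorten(fqdn: str) -> str:
        """Mimic the observed behaviour: keep the full name if it fits,
        otherwise fall back to the first DNS label."""
        if len(fqdn) < HOST_NAME_MAX:
            return fqdn
        return fqdn.split(".", 1)[0][:HOST_NAME_MAX - 1]

    full = ("ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826"
            ".c.flatcar-212911.internal")
    print(shorten(full))  # -> ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826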
May 15 00:06:35.375307 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:06:35.385463 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 00:06:35.396412 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 00:06:35.408525 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 00:06:35.418434 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 00:06:35.430318 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 00:06:35.441286 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:06:35.441346 systemd[1]: Reached target paths.target - Path Units. May 15 00:06:35.449266 systemd[1]: Reached target timers.target - Timer Units. May 15 00:06:35.459258 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 00:06:35.471071 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 00:06:35.481899 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 00:06:35.493582 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 00:06:35.505334 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 00:06:35.527089 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 00:06:35.537953 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 00:06:35.550720 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 00:06:35.562621 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 00:06:35.573190 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:06:35.583287 systemd[1]: Reached target basic.target - Basic System. May 15 00:06:35.591406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:06:35.591462 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:06:35.603349 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:06:35.622388 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 00:06:35.651764 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:06:35.667984 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:06:35.686445 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:06:35.696311 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:06:35.700168 jq[1443]: false May 15 00:06:35.702358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
May 15 00:06:35.703465 coreos-metadata[1439]: May 15 00:06:35.703 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 May 15 00:06:35.707160 coreos-metadata[1439]: May 15 00:06:35.705 INFO Fetch successful May 15 00:06:35.707160 coreos-metadata[1439]: May 15 00:06:35.705 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 May 15 00:06:35.707160 coreos-metadata[1439]: May 15 00:06:35.705 INFO Fetch successful May 15 00:06:35.707160 coreos-metadata[1439]: May 15 00:06:35.706 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 May 15 00:06:35.707160 coreos-metadata[1439]: May 15 00:06:35.706 INFO Fetch successful May 15 00:06:35.707160 coreos-metadata[1439]: May 15 00:06:35.706 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 May 15 00:06:35.707489 coreos-metadata[1439]: May 15 00:06:35.707 INFO Fetch successful May 15 00:06:35.718212 systemd[1]: Started ntpd.service - Network Time Service. May 15 00:06:35.734719 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:06:35.751419 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 00:06:35.760800 dbus-daemon[1440]: [system] SELinux support is enabled May 15 00:06:35.771539 dbus-daemon[1440]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1387 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 15 00:06:35.774385 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 00:06:35.781203 extend-filesystems[1444]: Found loop4 May 15 00:06:35.781203 extend-filesystems[1444]: Found loop5 May 15 00:06:35.781203 extend-filesystems[1444]: Found loop6 May 15 00:06:35.781203 extend-filesystems[1444]: Found loop7 May 15 00:06:35.781203 extend-filesystems[1444]: Found sda May 15 00:06:35.781203 extend-filesystems[1444]: Found sda1 May 15 00:06:35.781203 extend-filesystems[1444]: Found sda2 May 15 00:06:35.781203 extend-filesystems[1444]: Found sda3 May 15 00:06:35.781203 extend-filesystems[1444]: Found usr May 15 00:06:35.781203 extend-filesystems[1444]: Found sda4 May 15 00:06:35.781203 extend-filesystems[1444]: Found sda6 May 15 00:06:35.781203 extend-filesystems[1444]: Found sda7 May 15 00:06:35.781203 extend-filesystems[1444]: Found sda9 May 15 00:06:35.781203 extend-filesystems[1444]: Checking size of /dev/sda9 May 15 00:06:35.967447 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks May 15 00:06:35.967520 kernel: EXT4-fs (sda9): resized filesystem to 2538491 May 15 00:06:35.967574 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1299) May 15 00:06:35.794429 systemd[1]: Starting systemd-logind.service - User Login Management... 
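The coreos-metadata lines above fetch the hostname, external IP, internal IP, and machine type from GCE's link-local metadata service. The same lookups can be replayed by hand; the sketch below uses the endpoints exactly as logged plus the Metadata-Flavor header the GCE metadata server requires, and it only works from inside a GCE VM.

    import urllib.request

    BASE = "http://169.254.169.254/computeMetadata/v1/instance/"
    PATHS = [
        "hostname",
        "network-interfaces/0/access-configs/0/external-ip",
        "network-interfaces/0/ip",
        "machine-type",
    ]

    def fetch(path):
        req = urllib.request.Request(BASE + path,
                                     headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        for p in PATHS:
            print(p, "=", fetch(p))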
May 15 00:06:35.824280 ntpd[1446]: ntpd 4.2.8p17@1.4004-o Wed May 14 21:38:42 UTC 2025 (1): Starting May 15 00:06:35.968328 extend-filesystems[1444]: Resized partition /dev/sda9 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: ntpd 4.2.8p17@1.4004-o Wed May 14 21:38:42 UTC 2025 (1): Starting May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: ---------------------------------------------------- May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: ntp-4 is maintained by Network Time Foundation, May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: corporation. Support and training for ntp-4 are May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: available at https://www.nwtime.org/support May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: ---------------------------------------------------- May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: proto: precision = 0.090 usec (-23) May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: basedate set to 2025-05-02 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: gps base set to 2025-05-04 (week 2365) May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Listen and drop on 0 v6wildcard [::]:123 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Listen normally on 2 lo 127.0.0.1:123 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Listen normally on 3 eth0 10.128.0.108:123 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Listen normally on 4 lo [::1]:123 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: bind(21) AF_INET6 fe80::4001:aff:fe80:6c%2#123 flags 0x11 failed: Cannot assign requested address May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:6c%2#123 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: failed to init interface for address fe80::4001:aff:fe80:6c%2 May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: Listening on routing socket on fd #21 for interface updates May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 15 00:06:35.991495 ntpd[1446]: 15 May 00:06:35 ntpd[1446]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 15 00:06:35.820064 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). May 15 00:06:35.824312 ntpd[1446]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 15 00:06:35.993744 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) May 15 00:06:35.993744 extend-filesystems[1468]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 15 00:06:35.993744 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 2 May 15 00:06:35.993744 extend-filesystems[1468]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. May 15 00:06:35.821633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 15 00:06:36.050435 update_engine[1469]: I20250515 00:06:35.924643 1469 main.cc:92] Flatcar Update Engine starting May 15 00:06:36.050435 update_engine[1469]: I20250515 00:06:35.946558 1469 update_check_scheduler.cc:74] Next update check in 6m56s May 15 00:06:35.824326 ntpd[1446]: ---------------------------------------------------- May 15 00:06:36.051073 extend-filesystems[1444]: Resized filesystem in /dev/sda9 May 15 00:06:35.830031 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:06:36.062360 jq[1471]: true May 15 00:06:35.824339 ntpd[1446]: ntp-4 is maintained by Network Time Foundation, May 15 00:06:35.868116 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:06:35.824352 ntpd[1446]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 15 00:06:35.903799 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 00:06:35.824366 ntpd[1446]: corporation. Support and training for ntp-4 are May 15 00:06:35.940525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:06:35.824380 ntpd[1446]: available at https://www.nwtime.org/support May 15 00:06:35.942258 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:06:36.063841 jq[1477]: true May 15 00:06:35.824393 ntpd[1446]: ---------------------------------------------------- May 15 00:06:35.942763 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:06:35.837562 ntpd[1446]: proto: precision = 0.090 usec (-23) May 15 00:06:35.943076 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:06:35.840689 ntpd[1446]: basedate set to 2025-05-02 May 15 00:06:35.960634 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:06:35.840724 ntpd[1446]: gps base set to 2025-05-04 (week 2365) May 15 00:06:35.960981 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 00:06:35.851748 ntpd[1446]: Listen and drop on 0 v6wildcard [::]:123 May 15 00:06:35.984833 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:06:35.851823 ntpd[1446]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 15 00:06:35.986251 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:06:35.852085 ntpd[1446]: Listen normally on 2 lo 127.0.0.1:123 May 15 00:06:36.017061 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button) May 15 00:06:35.852174 ntpd[1446]: Listen normally on 3 eth0 10.128.0.108:123 May 15 00:06:36.017094 systemd-logind[1458]: Watching system buttons on /dev/input/event3 (Sleep Button) May 15 00:06:35.852243 ntpd[1446]: Listen normally on 4 lo [::1]:123 May 15 00:06:36.017126 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 00:06:35.852310 ntpd[1446]: bind(21) AF_INET6 fe80::4001:aff:fe80:6c%2#123 flags 0x11 failed: Cannot assign requested address May 15 00:06:36.033290 systemd-logind[1458]: New seat seat0. May 15 00:06:35.852343 ntpd[1446]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:6c%2#123 May 15 00:06:36.043374 systemd[1]: Started systemd-logind.service - User Login Management. 
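The extend-filesystems output above records an online grow of the ext4 root filesystem on /dev/sda9 from 1617920 to 2538491 4k blocks. A minimal sketch of the same operation, assuming the device name from the log and that the partition itself has already been enlarged; resize2fs can grow a mounted ext4 filesystem in place:

    # grow the mounted ext4 root filesystem to fill its enlarged partition
    sudo resize2fs /dev/sda9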
May 15 00:06:35.852365 ntpd[1446]: failed to init interface for address fe80::4001:aff:fe80:6c%2 May 15 00:06:36.058717 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:06:35.852415 ntpd[1446]: Listening on routing socket on fd #21 for interface updates May 15 00:06:35.855322 ntpd[1446]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 15 00:06:35.855386 ntpd[1446]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 15 00:06:36.087744 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 00:06:36.135623 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.systemd1' May 15 00:06:36.197582 systemd[1]: Started update-engine.service - Update Engine. May 15 00:06:36.210894 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:06:36.211801 tar[1475]: linux-amd64/helm May 15 00:06:36.222251 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 00:06:36.222542 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:06:36.224611 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 00:06:36.249178 bash[1508]: Updated "/home/core/.ssh/authorized_keys" May 15 00:06:36.249089 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 15 00:06:36.259304 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:06:36.259562 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:06:36.279660 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 00:06:36.305744 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:06:36.329601 systemd[1]: Starting sshkeys.service... May 15 00:06:36.383324 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 00:06:36.401645 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 15 00:06:36.568983 coreos-metadata[1513]: May 15 00:06:36.568 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 May 15 00:06:36.573181 coreos-metadata[1513]: May 15 00:06:36.572 INFO Fetch failed with 404: resource not found May 15 00:06:36.573181 coreos-metadata[1513]: May 15 00:06:36.572 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 May 15 00:06:36.573181 coreos-metadata[1513]: May 15 00:06:36.572 INFO Fetch successful May 15 00:06:36.573181 coreos-metadata[1513]: May 15 00:06:36.572 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 May 15 00:06:36.574163 coreos-metadata[1513]: May 15 00:06:36.573 INFO Fetch failed with 404: resource not found May 15 00:06:36.574163 coreos-metadata[1513]: May 15 00:06:36.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 May 15 00:06:36.578664 coreos-metadata[1513]: May 15 00:06:36.576 INFO Fetch failed with 404: resource not found May 15 00:06:36.578664 coreos-metadata[1513]: May 15 00:06:36.576 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 May 15 00:06:36.578664 coreos-metadata[1513]: May 15 00:06:36.577 INFO Fetch successful May 15 00:06:36.581451 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 15 00:06:36.581874 unknown[1513]: wrote ssh authorized keys file for user: core May 15 00:06:36.583984 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.hostname1' May 15 00:06:36.587271 dbus-daemon[1440]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1509 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 15 00:06:36.624652 systemd[1]: Starting polkit.service - Authorization Manager... May 15 00:06:36.656454 update-ssh-keys[1522]: Updated "/home/core/.ssh/authorized_keys" May 15 00:06:36.658244 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 00:06:36.674250 systemd[1]: Finished sshkeys.service. May 15 00:06:36.741002 polkitd[1524]: Started polkitd version 121 May 15 00:06:36.766607 polkitd[1524]: Loading rules from directory /etc/polkit-1/rules.d May 15 00:06:36.766997 polkitd[1524]: Loading rules from directory /usr/share/polkit-1/rules.d May 15 00:06:36.771866 polkitd[1524]: Finished loading, compiling and executing 2 rules May 15 00:06:36.773670 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 15 00:06:36.774968 systemd[1]: Started polkit.service - Authorization Manager. May 15 00:06:36.778789 polkitd[1524]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 15 00:06:36.789373 systemd-networkd[1387]: eth0: Gained IPv6LL May 15 00:06:36.798267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:06:36.810082 systemd[1]: Reached target network-online.target - Network is Online. May 15 00:06:36.828462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:06:36.846232 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:06:36.850960 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
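The coreos-metadata-sshkeys run above probes several metadata attributes in turn, treating 404 as "not set" and collecting keys from the sources that do exist before writing /home/core/.ssh/authorized_keys. A hypothetical sketch of those probes (the agent also checks block-project-ssh-keys, omitted here, and merges the results itself):

    # query each SSH-key source seen in the log; a 404 simply means that attribute is not set
    for path in instance/attributes/sshKeys instance/attributes/ssh-keys \
                project/attributes/sshKeys project/attributes/ssh-keys; do
        curl -sf -H "Metadata-Flavor: Google" \
            "http://169.254.169.254/computeMetadata/v1/$path" && echo
    done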
May 15 00:06:36.863735 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:06:36.868528 systemd[1]: Starting oem-gce.service - GCE Linux Agent... May 15 00:06:36.896559 systemd-hostnamed[1509]: Hostname set to (transient) May 15 00:06:36.897998 systemd-resolved[1388]: System hostname changed to 'ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826'. May 15 00:06:36.930179 init.sh[1540]: + '[' -e /etc/default/instance_configs.cfg.template ']' May 15 00:06:36.930179 init.sh[1540]: + echo -e '[InstanceSetup]\nset_host_keys = false' May 15 00:06:36.930179 init.sh[1540]: + /usr/bin/google_instance_setup May 15 00:06:36.942961 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:06:36.965734 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:06:36.982592 systemd[1]: Started sshd@0-10.128.0.108:22-147.75.109.163:40200.service - OpenSSH per-connection server daemon (147.75.109.163:40200). May 15 00:06:36.988026 containerd[1478]: time="2025-05-15T00:06:36.987921801Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 00:06:36.998073 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:06:37.039230 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:06:37.039600 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:06:37.058574 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:06:37.133739 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:06:37.156241 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:06:37.173556 containerd[1478]: time="2025-05-15T00:06:37.173494414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:06:37.174694 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 00:06:37.180523 containerd[1478]: time="2025-05-15T00:06:37.179933309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:06:37.180523 containerd[1478]: time="2025-05-15T00:06:37.179985750Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:06:37.180523 containerd[1478]: time="2025-05-15T00:06:37.180013901Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:06:37.180523 containerd[1478]: time="2025-05-15T00:06:37.180244868Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:06:37.180523 containerd[1478]: time="2025-05-15T00:06:37.180283104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 00:06:37.180523 containerd[1478]: time="2025-05-15T00:06:37.180377498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:06:37.180523 containerd[1478]: time="2025-05-15T00:06:37.180399285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 15 00:06:37.180948 containerd[1478]: time="2025-05-15T00:06:37.180883926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:06:37.180948 containerd[1478]: time="2025-05-15T00:06:37.180921549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:06:37.181031 containerd[1478]: time="2025-05-15T00:06:37.180946315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:06:37.181031 containerd[1478]: time="2025-05-15T00:06:37.180963275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:06:37.181175 containerd[1478]: time="2025-05-15T00:06:37.181109515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:06:37.182547 containerd[1478]: time="2025-05-15T00:06:37.181572812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:06:37.182547 containerd[1478]: time="2025-05-15T00:06:37.181889700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:06:37.182547 containerd[1478]: time="2025-05-15T00:06:37.181915576Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:06:37.184610 containerd[1478]: time="2025-05-15T00:06:37.182822258Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:06:37.184610 containerd[1478]: time="2025-05-15T00:06:37.182906924Z" level=info msg="metadata content store policy set" policy=shared May 15 00:06:37.185595 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:06:37.195250 containerd[1478]: time="2025-05-15T00:06:37.195211061Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:06:37.195369 containerd[1478]: time="2025-05-15T00:06:37.195285255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:06:37.195369 containerd[1478]: time="2025-05-15T00:06:37.195315120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 00:06:37.195369 containerd[1478]: time="2025-05-15T00:06:37.195344787Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:06:37.195498 containerd[1478]: time="2025-05-15T00:06:37.195367693Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:06:37.195608 containerd[1478]: time="2025-05-15T00:06:37.195571493Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.195963104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196176710Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196207065Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196246418Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196271076Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196298387Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196323144Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196348745Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196372638Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196396406Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196418748Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196439510Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196476250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198269 containerd[1478]: time="2025-05-15T00:06:37.196500461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196531827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196556767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196578301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196601436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196622765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196646046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196669345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196697153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196717483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196738196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196769894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196805360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196861708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196888078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:06:37.198918 containerd[1478]: time="2025-05-15T00:06:37.196909205Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197002796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197039874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197059997Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197085105Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197106247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197136017Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197181982Z" level=info msg="NRI interface is disabled by configuration." May 15 00:06:37.200191 containerd[1478]: time="2025-05-15T00:06:37.197204785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 00:06:37.201224 containerd[1478]: time="2025-05-15T00:06:37.200432134Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:06:37.201224 containerd[1478]: time="2025-05-15T00:06:37.200522345Z" level=info msg="Connect containerd service" May 15 00:06:37.201224 containerd[1478]: time="2025-05-15T00:06:37.200592147Z" level=info msg="using legacy CRI server" May 15 00:06:37.201224 containerd[1478]: time="2025-05-15T00:06:37.200617916Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:06:37.201224 containerd[1478]: time="2025-05-15T00:06:37.200810003Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:06:37.202569 containerd[1478]: time="2025-05-15T00:06:37.201763538Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:06:37.203550 
containerd[1478]: time="2025-05-15T00:06:37.203398981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:06:37.203685 containerd[1478]: time="2025-05-15T00:06:37.203653266Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:06:37.203803 containerd[1478]: time="2025-05-15T00:06:37.203767373Z" level=info msg="Start subscribing containerd event" May 15 00:06:37.204974 containerd[1478]: time="2025-05-15T00:06:37.204912646Z" level=info msg="Start recovering state" May 15 00:06:37.205441 containerd[1478]: time="2025-05-15T00:06:37.205276360Z" level=info msg="Start event monitor" May 15 00:06:37.205441 containerd[1478]: time="2025-05-15T00:06:37.205312613Z" level=info msg="Start snapshots syncer" May 15 00:06:37.205441 containerd[1478]: time="2025-05-15T00:06:37.205329259Z" level=info msg="Start cni network conf syncer for default" May 15 00:06:37.205441 containerd[1478]: time="2025-05-15T00:06:37.205342514Z" level=info msg="Start streaming server" May 15 00:06:37.205871 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:06:37.207215 containerd[1478]: time="2025-05-15T00:06:37.205943601Z" level=info msg="containerd successfully booted in 0.224074s" May 15 00:06:37.515294 sshd[1560]: Accepted publickey for core from 147.75.109.163 port 40200 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:37.520755 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:37.543981 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:06:37.564764 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:06:37.602199 systemd-logind[1458]: New session 1 of user core. May 15 00:06:37.621216 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:06:37.643359 tar[1475]: linux-amd64/LICENSE May 15 00:06:37.643359 tar[1475]: linux-amd64/README.md May 15 00:06:37.644595 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 00:06:37.676510 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 00:06:37.688422 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:06:37.694039 systemd-logind[1458]: New session c1 of user core. May 15 00:06:37.897437 instance-setup[1554]: INFO Running google_set_multiqueue. May 15 00:06:37.934348 instance-setup[1554]: INFO Set channels for eth0 to 2. May 15 00:06:37.946352 instance-setup[1554]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. May 15 00:06:37.951682 instance-setup[1554]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 May 15 00:06:37.952458 instance-setup[1554]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. May 15 00:06:37.956385 instance-setup[1554]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 May 15 00:06:37.958459 instance-setup[1554]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. May 15 00:06:37.962615 instance-setup[1554]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 May 15 00:06:37.962677 instance-setup[1554]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
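The CRI configuration dump logged just above (timestamp 00:06:37.200) shows the runc runtime running with SystemdCgroup:true and pause:3.8 as the sandbox image. As a hedged sketch, assuming the stock containerd CLI is on the host, the merged configuration can be inspected directly rather than read out of the journal:

    # print containerd's effective (merged) configuration and locate the cgroup driver setting
    containerd config dump | grep -n -A 2 "SystemdCgroup"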
May 15 00:06:37.964424 instance-setup[1554]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 May 15 00:06:37.979889 instance-setup[1554]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type May 15 00:06:37.988455 instance-setup[1554]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type May 15 00:06:37.990758 instance-setup[1554]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus May 15 00:06:37.990948 instance-setup[1554]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus May 15 00:06:38.017423 init.sh[1540]: + /usr/bin/google_metadata_script_runner --script-type startup May 15 00:06:38.032306 systemd[1575]: Queued start job for default target default.target. May 15 00:06:38.039751 systemd[1575]: Created slice app.slice - User Application Slice. May 15 00:06:38.040578 systemd[1575]: Reached target paths.target - Paths. May 15 00:06:38.040875 systemd[1575]: Reached target timers.target - Timers. May 15 00:06:38.046694 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:06:38.070123 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:06:38.071386 systemd[1575]: Reached target sockets.target - Sockets. May 15 00:06:38.071472 systemd[1575]: Reached target basic.target - Basic System. May 15 00:06:38.071544 systemd[1575]: Reached target default.target - Main User Target. May 15 00:06:38.071595 systemd[1575]: Startup finished in 358ms. May 15 00:06:38.071891 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:06:38.087397 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:06:38.198340 startup-script[1614]: INFO Starting startup scripts. May 15 00:06:38.204056 startup-script[1614]: INFO No startup scripts found in metadata. May 15 00:06:38.204131 startup-script[1614]: INFO Finished running startup scripts. May 15 00:06:38.227805 init.sh[1540]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM May 15 00:06:38.227805 init.sh[1540]: + daemon_pids=() May 15 00:06:38.227993 init.sh[1540]: + for d in accounts clock_skew network May 15 00:06:38.229944 init.sh[1540]: + daemon_pids+=($!) May 15 00:06:38.229944 init.sh[1540]: + for d in accounts clock_skew network May 15 00:06:38.229944 init.sh[1540]: + daemon_pids+=($!) May 15 00:06:38.229944 init.sh[1540]: + for d in accounts clock_skew network May 15 00:06:38.229944 init.sh[1540]: + daemon_pids+=($!) May 15 00:06:38.229944 init.sh[1540]: + NOTIFY_SOCKET=/run/systemd/notify May 15 00:06:38.229944 init.sh[1540]: + /usr/bin/systemd-notify --ready May 15 00:06:38.230303 init.sh[1621]: + /usr/bin/google_clock_skew_daemon May 15 00:06:38.230724 init.sh[1622]: + /usr/bin/google_network_daemon May 15 00:06:38.232646 init.sh[1620]: + /usr/bin/google_accounts_daemon May 15 00:06:38.257300 systemd[1]: Started oem-gce.service - GCE Linux Agent. May 15 00:06:38.268480 init.sh[1540]: + wait -n 1620 1621 1622 May 15 00:06:38.350564 systemd[1]: Started sshd@1-10.128.0.108:22-147.75.109.163:48754.service - OpenSSH per-connection server daemon (147.75.109.163:48754). May 15 00:06:38.682986 google-networking[1622]: INFO Starting Google Networking daemon. May 15 00:06:38.714450 google-clock-skew[1621]: INFO Starting Google Clock Skew daemon. 
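google_set_multiqueue, whose output appears above, distributes the virtio-net queue IRQs across CPUs and sets per-queue XPS transmit masks. A hypothetical illustration of the same kind of tuning, reusing the IRQ and queue numbers from this log (the script's own logic and error handling are more involved):

    # pin the IRQ of eth0's first virtio queue pair to CPU 0, and steer tx-0 transmits to CPU 0 (mask 0x1)
    echo 0 | sudo tee /proc/irq/31/smp_affinity_list
    echo 1 | sudo tee /sys/class/net/eth0/queues/tx-0/xps_cpus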
May 15 00:06:38.718625 sshd[1626]: Accepted publickey for core from 147.75.109.163 port 48754 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:38.722118 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:38.729202 google-clock-skew[1621]: INFO Clock drift token has changed: 0. May 15 00:06:38.732498 systemd-logind[1458]: New session 2 of user core. May 15 00:06:38.738465 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 00:06:39.000595 google-clock-skew[1621]: INFO Synced system time with hardware clock. May 15 00:06:39.001999 systemd-resolved[1388]: Clock change detected. Flushing caches. May 15 00:06:39.017753 groupadd[1636]: group added to /etc/group: name=google-sudoers, GID=1000 May 15 00:06:39.022524 groupadd[1636]: group added to /etc/gshadow: name=google-sudoers May 15 00:06:39.054794 ntpd[1446]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:6c%2]:123 May 15 00:06:39.055255 ntpd[1446]: 15 May 00:06:39 ntpd[1446]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:6c%2]:123 May 15 00:06:39.068782 groupadd[1636]: new group: name=google-sudoers, GID=1000 May 15 00:06:39.098410 google-accounts[1620]: INFO Starting Google Accounts daemon. May 15 00:06:39.110370 google-accounts[1620]: WARNING OS Login not installed. May 15 00:06:39.111802 google-accounts[1620]: INFO Creating a new user account for 0. May 15 00:06:39.117859 init.sh[1646]: useradd: invalid user name '0': use --badname to ignore May 15 00:06:39.119653 google-accounts[1620]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. May 15 00:06:39.173174 sshd[1638]: Connection closed by 147.75.109.163 port 48754 May 15 00:06:39.172781 sshd-session[1626]: pam_unix(sshd:session): session closed for user core May 15 00:06:39.180482 systemd[1]: sshd@1-10.128.0.108:22-147.75.109.163:48754.service: Deactivated successfully. May 15 00:06:39.188708 systemd[1]: session-2.scope: Deactivated successfully. May 15 00:06:39.191304 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. May 15 00:06:39.199146 systemd-logind[1458]: Removed session 2. May 15 00:06:39.214895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:06:39.235725 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:06:39.241523 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 00:06:39.259463 systemd[1]: Started sshd@2-10.128.0.108:22-147.75.109.163:48770.service - OpenSSH per-connection server daemon (147.75.109.163:48770). May 15 00:06:39.272013 systemd[1]: Startup finished in 1.047s (kernel) + 10.591s (initrd) + 9.755s (userspace) = 21.393s. May 15 00:06:39.564605 sshd[1659]: Accepted publickey for core from 147.75.109.163 port 48770 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:39.566736 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:39.576368 systemd-logind[1458]: New session 3 of user core. May 15 00:06:39.580411 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 15 00:06:39.783167 sshd[1669]: Connection closed by 147.75.109.163 port 48770 May 15 00:06:39.785339 sshd-session[1659]: pam_unix(sshd:session): session closed for user core May 15 00:06:39.792409 systemd[1]: sshd@2-10.128.0.108:22-147.75.109.163:48770.service: Deactivated successfully. May 15 00:06:39.795023 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:06:39.796338 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. May 15 00:06:39.797911 systemd-logind[1458]: Removed session 3. May 15 00:06:40.205160 kubelet[1657]: E0515 00:06:40.205082 1657 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:06:40.208015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:06:40.208318 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:06:40.208921 systemd[1]: kubelet.service: Consumed 1.217s CPU time, 244M memory peak. May 15 00:06:49.844535 systemd[1]: Started sshd@3-10.128.0.108:22-147.75.109.163:38792.service - OpenSSH per-connection server daemon (147.75.109.163:38792). May 15 00:06:50.139350 sshd[1678]: Accepted publickey for core from 147.75.109.163 port 38792 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:50.140939 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:50.148079 systemd-logind[1458]: New session 4 of user core. May 15 00:06:50.159329 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:06:50.313728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 00:06:50.326388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:06:50.356085 sshd[1680]: Connection closed by 147.75.109.163 port 38792 May 15 00:06:50.358323 sshd-session[1678]: pam_unix(sshd:session): session closed for user core May 15 00:06:50.363509 systemd[1]: sshd@3-10.128.0.108:22-147.75.109.163:38792.service: Deactivated successfully. May 15 00:06:50.365831 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:06:50.366927 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. May 15 00:06:50.368781 systemd-logind[1458]: Removed session 4. May 15 00:06:50.416486 systemd[1]: Started sshd@4-10.128.0.108:22-147.75.109.163:38800.service - OpenSSH per-connection server daemon (147.75.109.163:38800). May 15 00:06:50.611281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:06:50.628762 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:06:50.693941 kubelet[1695]: E0515 00:06:50.693765 1695 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:06:50.699021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:06:50.699331 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 00:06:50.699989 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.4M memory peak. May 15 00:06:50.709725 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 38800 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:50.711582 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:50.717999 systemd-logind[1458]: New session 5 of user core. May 15 00:06:50.729370 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 00:06:50.918080 sshd[1704]: Connection closed by 147.75.109.163 port 38800 May 15 00:06:50.918996 sshd-session[1689]: pam_unix(sshd:session): session closed for user core May 15 00:06:50.924311 systemd[1]: sshd@4-10.128.0.108:22-147.75.109.163:38800.service: Deactivated successfully. May 15 00:06:50.926670 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:06:50.927750 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. May 15 00:06:50.929174 systemd-logind[1458]: Removed session 5. May 15 00:06:50.980451 systemd[1]: Started sshd@5-10.128.0.108:22-147.75.109.163:38816.service - OpenSSH per-connection server daemon (147.75.109.163:38816). May 15 00:06:51.267642 sshd[1710]: Accepted publickey for core from 147.75.109.163 port 38816 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:51.269456 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:51.276403 systemd-logind[1458]: New session 6 of user core. May 15 00:06:51.285305 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 00:06:51.480699 sshd[1712]: Connection closed by 147.75.109.163 port 38816 May 15 00:06:51.481642 sshd-session[1710]: pam_unix(sshd:session): session closed for user core May 15 00:06:51.486714 systemd[1]: sshd@5-10.128.0.108:22-147.75.109.163:38816.service: Deactivated successfully. May 15 00:06:51.489019 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:06:51.490016 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. May 15 00:06:51.491680 systemd-logind[1458]: Removed session 6. May 15 00:06:51.539463 systemd[1]: Started sshd@6-10.128.0.108:22-147.75.109.163:38828.service - OpenSSH per-connection server daemon (147.75.109.163:38828). May 15 00:06:51.831586 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 38828 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:51.833436 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:51.839315 systemd-logind[1458]: New session 7 of user core. May 15 00:06:51.846267 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 00:06:52.027520 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:06:52.028081 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:06:52.043112 sudo[1721]: pam_unix(sudo:session): session closed for user root May 15 00:06:52.086202 sshd[1720]: Connection closed by 147.75.109.163 port 38828 May 15 00:06:52.087771 sshd-session[1718]: pam_unix(sshd:session): session closed for user core May 15 00:06:52.093396 systemd[1]: sshd@6-10.128.0.108:22-147.75.109.163:38828.service: Deactivated successfully. May 15 00:06:52.095758 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:06:52.096964 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. 
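The kubelet failure above, like the one at 00:06:40, is the expected crash-and-restart loop on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, after which the scheduled restarts succeed. A short sketch of confirming that diagnosis with standard systemd tooling (commands not taken from this log):

    # show the kubelet unit state and its most recent failed attempts
    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager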
May 15 00:06:52.098965 systemd-logind[1458]: Removed session 7. May 15 00:06:52.143471 systemd[1]: Started sshd@7-10.128.0.108:22-147.75.109.163:38840.service - OpenSSH per-connection server daemon (147.75.109.163:38840). May 15 00:06:52.432598 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 38840 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:52.434462 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:52.440608 systemd-logind[1458]: New session 8 of user core. May 15 00:06:52.447291 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:06:52.612385 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 00:06:52.612908 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:06:52.618316 sudo[1731]: pam_unix(sudo:session): session closed for user root May 15 00:06:52.633024 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 00:06:52.633604 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:06:52.654150 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 00:06:52.693344 augenrules[1753]: No rules May 15 00:06:52.693694 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:06:52.694023 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 00:06:52.695879 sudo[1730]: pam_unix(sudo:session): session closed for user root May 15 00:06:52.739024 sshd[1729]: Connection closed by 147.75.109.163 port 38840 May 15 00:06:52.739894 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 15 00:06:52.745535 systemd[1]: sshd@7-10.128.0.108:22-147.75.109.163:38840.service: Deactivated successfully. May 15 00:06:52.747932 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:06:52.749143 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. May 15 00:06:52.750787 systemd-logind[1458]: Removed session 8. May 15 00:06:52.796494 systemd[1]: Started sshd@8-10.128.0.108:22-147.75.109.163:38848.service - OpenSSH per-connection server daemon (147.75.109.163:38848). May 15 00:06:53.085692 sshd[1762]: Accepted publickey for core from 147.75.109.163 port 38848 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:06:53.087820 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:06:53.094031 systemd-logind[1458]: New session 9 of user core. May 15 00:06:53.101301 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:06:53.264229 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:06:53.264734 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:06:53.736467 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 00:06:53.748751 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 00:06:54.194975 dockerd[1783]: time="2025-05-15T00:06:54.194798302Z" level=info msg="Starting up" May 15 00:06:54.331243 dockerd[1783]: time="2025-05-15T00:06:54.331169443Z" level=info msg="Loading containers: start." 
May 15 00:06:54.542088 kernel: Initializing XFRM netlink socket May 15 00:06:54.659661 systemd-networkd[1387]: docker0: Link UP May 15 00:06:54.699972 dockerd[1783]: time="2025-05-15T00:06:54.699907843Z" level=info msg="Loading containers: done." May 15 00:06:54.721406 dockerd[1783]: time="2025-05-15T00:06:54.721341331Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:06:54.721636 dockerd[1783]: time="2025-05-15T00:06:54.721475379Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 15 00:06:54.721703 dockerd[1783]: time="2025-05-15T00:06:54.721639749Z" level=info msg="Daemon has completed initialization" May 15 00:06:54.761223 dockerd[1783]: time="2025-05-15T00:06:54.761155552Z" level=info msg="API listen on /run/docker.sock" May 15 00:06:54.761391 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 00:06:55.801623 containerd[1478]: time="2025-05-15T00:06:55.800879471Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 00:06:56.337327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173867684.mount: Deactivated successfully. May 15 00:06:56.344886 systemd[1]: Started sshd@9-10.128.0.108:22-167.94.146.57:35330.service - OpenSSH per-connection server daemon (167.94.146.57:35330). May 15 00:06:58.230652 containerd[1478]: time="2025-05-15T00:06:58.230575472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:58.232280 containerd[1478]: time="2025-05-15T00:06:58.232211011Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32681501" May 15 00:06:58.233789 containerd[1478]: time="2025-05-15T00:06:58.233716783Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:58.237561 containerd[1478]: time="2025-05-15T00:06:58.237491862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:58.239620 containerd[1478]: time="2025-05-15T00:06:58.238964450Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.438027614s" May 15 00:06:58.239620 containerd[1478]: time="2025-05-15T00:06:58.239012665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 15 00:06:58.271758 containerd[1478]: time="2025-05-15T00:06:58.271714000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 00:06:59.898964 containerd[1478]: time="2025-05-15T00:06:59.898892187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 15 00:06:59.906474 containerd[1478]: time="2025-05-15T00:06:59.906166301Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29619468" May 15 00:06:59.906770 containerd[1478]: time="2025-05-15T00:06:59.906735567Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:59.910707 containerd[1478]: time="2025-05-15T00:06:59.910653582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:06:59.912819 containerd[1478]: time="2025-05-15T00:06:59.912766176Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.640927628s" May 15 00:06:59.912819 containerd[1478]: time="2025-05-15T00:06:59.912821994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 15 00:06:59.944860 containerd[1478]: time="2025-05-15T00:06:59.944793339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 00:07:00.950364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 00:07:00.959310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:01.235619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 00:07:01.241766 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:07:01.249735 containerd[1478]: time="2025-05-15T00:07:01.247961336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:01.251637 containerd[1478]: time="2025-05-15T00:07:01.251539417Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17905598" May 15 00:07:01.253332 containerd[1478]: time="2025-05-15T00:07:01.253297549Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:01.262254 containerd[1478]: time="2025-05-15T00:07:01.262211685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:01.265066 containerd[1478]: time="2025-05-15T00:07:01.263547601Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.318706168s" May 15 00:07:01.265066 containerd[1478]: time="2025-05-15T00:07:01.264440991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 15 00:07:01.310154 containerd[1478]: time="2025-05-15T00:07:01.310104104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 00:07:01.320621 kubelet[2059]: E0515 00:07:01.320568 2059 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:07:01.324261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:07:01.324505 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:07:01.325006 systemd[1]: kubelet.service: Consumed 206ms CPU time, 94.1M memory peak. May 15 00:07:02.366215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587530916.mount: Deactivated successfully. 
May 15 00:07:02.938635 containerd[1478]: time="2025-05-15T00:07:02.938552305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:02.940545 containerd[1478]: time="2025-05-15T00:07:02.940471461Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29187712" May 15 00:07:02.942491 containerd[1478]: time="2025-05-15T00:07:02.942422153Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:02.946150 containerd[1478]: time="2025-05-15T00:07:02.946074402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:02.947261 containerd[1478]: time="2025-05-15T00:07:02.947210044Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.637051417s" May 15 00:07:02.947384 containerd[1478]: time="2025-05-15T00:07:02.947265811Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 15 00:07:02.978419 containerd[1478]: time="2025-05-15T00:07:02.978373846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 00:07:03.471353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290496012.mount: Deactivated successfully. 
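The PullImage lines above show containerd fetching the v1.30.12 kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy images into its k8s.io namespace. For reference, a hedged sketch of an equivalent manual pull with the stock containerd client, using an image reference taken from the log:

    # pull one of the same control-plane images into containerd's Kubernetes image namespace
    sudo ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.30.12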
May 15 00:07:04.586000 containerd[1478]: time="2025-05-15T00:07:04.585923986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:04.587686 containerd[1478]: time="2025-05-15T00:07:04.587611550Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" May 15 00:07:04.589190 containerd[1478]: time="2025-05-15T00:07:04.589119092Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:04.596087 containerd[1478]: time="2025-05-15T00:07:04.594278752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:04.601224 containerd[1478]: time="2025-05-15T00:07:04.601171098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.622683982s" May 15 00:07:04.601438 containerd[1478]: time="2025-05-15T00:07:04.601413292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 00:07:04.634279 containerd[1478]: time="2025-05-15T00:07:04.634224120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 00:07:05.028795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279094067.mount: Deactivated successfully. 
May 15 00:07:05.035478 containerd[1478]: time="2025-05-15T00:07:05.035384422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:05.036688 containerd[1478]: time="2025-05-15T00:07:05.036629724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" May 15 00:07:05.038062 containerd[1478]: time="2025-05-15T00:07:05.037967880Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:05.042826 containerd[1478]: time="2025-05-15T00:07:05.042758817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:05.044604 containerd[1478]: time="2025-05-15T00:07:05.044104159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 409.820622ms" May 15 00:07:05.044604 containerd[1478]: time="2025-05-15T00:07:05.044147300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 15 00:07:05.078435 containerd[1478]: time="2025-05-15T00:07:05.078372170Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 00:07:05.495481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558904197.mount: Deactivated successfully. May 15 00:07:07.162372 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 15 00:07:07.816994 containerd[1478]: time="2025-05-15T00:07:07.816918346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:07.818723 containerd[1478]: time="2025-05-15T00:07:07.818653619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" May 15 00:07:07.820126 containerd[1478]: time="2025-05-15T00:07:07.820023425Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:07.823863 containerd[1478]: time="2025-05-15T00:07:07.823795174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:07.825649 containerd[1478]: time="2025-05-15T00:07:07.825431228Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.747007974s" May 15 00:07:07.825649 containerd[1478]: time="2025-05-15T00:07:07.825486044Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 15 00:07:11.126421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:07:11.126741 systemd[1]: kubelet.service: Consumed 206ms CPU time, 94.1M memory peak. May 15 00:07:11.140482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:11.181381 systemd[1]: Reload requested from client PID 2253 ('systemctl') (unit session-9.scope)... May 15 00:07:11.181404 systemd[1]: Reloading... May 15 00:07:11.363091 sshd[1982]: Connection closed by 167.94.146.57 port 35330 [preauth] May 15 00:07:11.372936 zram_generator::config[2300]: No configuration found. May 15 00:07:11.526650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:07:11.673250 systemd[1]: Reloading finished in 491 ms. May 15 00:07:11.694315 systemd[1]: sshd@9-10.128.0.108:22-167.94.146.57:35330.service: Deactivated successfully. May 15 00:07:11.744266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:07:11.760955 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:07:11.769263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:11.771301 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:07:11.771660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:07:11.771763 systemd[1]: kubelet.service: Consumed 141ms CPU time, 84.8M memory peak. May 15 00:07:11.777525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:13.328009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 00:07:13.341881 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:07:13.402096 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:07:13.402096 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:07:13.402096 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:07:13.402697 kubelet[2359]: I0515 00:07:13.402183 2359 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:07:14.179569 kubelet[2359]: I0515 00:07:14.179508 2359 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 00:07:14.179569 kubelet[2359]: I0515 00:07:14.179546 2359 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:07:14.179924 kubelet[2359]: I0515 00:07:14.179876 2359 server.go:927] "Client rotation is on, will bootstrap in background" May 15 00:07:14.205721 kubelet[2359]: I0515 00:07:14.204574 2359 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:07:14.205721 kubelet[2359]: E0515 00:07:14.205616 2359 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.227359 kubelet[2359]: I0515 00:07:14.227317 2359 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:07:14.231606 kubelet[2359]: I0515 00:07:14.231483 2359 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:07:14.231880 kubelet[2359]: I0515 00:07:14.231560 2359 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 00:07:14.232127 kubelet[2359]: I0515 00:07:14.231888 2359 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:07:14.232127 kubelet[2359]: I0515 00:07:14.231909 2359 container_manager_linux.go:301] "Creating device plugin manager" May 15 00:07:14.232127 kubelet[2359]: I0515 00:07:14.232118 2359 state_mem.go:36] "Initialized new in-memory state store" May 15 00:07:14.233388 kubelet[2359]: I0515 00:07:14.233356 2359 kubelet.go:400] "Attempting to sync node with API server" May 15 00:07:14.233388 kubelet[2359]: I0515 00:07:14.233389 2359 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:07:14.233745 kubelet[2359]: I0515 00:07:14.233427 2359 kubelet.go:312] "Adding apiserver pod source" May 15 00:07:14.233745 kubelet[2359]: I0515 00:07:14.233454 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:07:14.240639 kubelet[2359]: W0515 00:07:14.240276 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.240840 kubelet[2359]: E0515 00:07:14.240821 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.241150 kubelet[2359]: I0515 00:07:14.241082 2359 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.23" apiVersion="v1" May 15 00:07:14.244065 kubelet[2359]: I0515 00:07:14.243288 2359 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:07:14.244065 kubelet[2359]: W0515 00:07:14.243369 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 00:07:14.244211 kubelet[2359]: I0515 00:07:14.244195 2359 server.go:1264] "Started kubelet" May 15 00:07:14.246908 kubelet[2359]: W0515 00:07:14.246823 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826&limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.247023 kubelet[2359]: E0515 00:07:14.246919 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826&limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.248026 kubelet[2359]: I0515 00:07:14.247111 2359 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:07:14.248521 kubelet[2359]: I0515 00:07:14.248480 2359 server.go:455] "Adding debug handlers to kubelet server" May 15 00:07:14.251727 kubelet[2359]: I0515 00:07:14.251676 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:07:14.253320 kubelet[2359]: I0515 00:07:14.253235 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:07:14.253676 kubelet[2359]: I0515 00:07:14.253518 2359 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:07:14.255979 kubelet[2359]: E0515 00:07:14.254205 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826.183f8aa7eaa8e95f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,UID:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,},FirstTimestamp:2025-05-15 00:07:14.244151647 +0000 UTC m=+0.896227278,LastTimestamp:2025-05-15 00:07:14.244151647 +0000 UTC m=+0.896227278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,}" May 15 00:07:14.259509 kubelet[2359]: E0515 00:07:14.259482 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" not found" May 15 00:07:14.260329 kubelet[2359]: I0515 00:07:14.260310 2359 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 00:07:14.260665 kubelet[2359]: I0515 00:07:14.260644 2359 desired_state_of_world_populator.go:149] "Desired state populator starts 
to run" May 15 00:07:14.260844 kubelet[2359]: I0515 00:07:14.260828 2359 reconciler.go:26] "Reconciler: start to sync state" May 15 00:07:14.261431 kubelet[2359]: W0515 00:07:14.261377 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.261601 kubelet[2359]: E0515 00:07:14.261581 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.264320 kubelet[2359]: E0515 00:07:14.264290 2359 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:07:14.264698 kubelet[2359]: E0515 00:07:14.264638 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="200ms" May 15 00:07:14.266431 kubelet[2359]: I0515 00:07:14.266407 2359 factory.go:221] Registration of the containerd container factory successfully May 15 00:07:14.266431 kubelet[2359]: I0515 00:07:14.266431 2359 factory.go:221] Registration of the systemd container factory successfully May 15 00:07:14.266581 kubelet[2359]: I0515 00:07:14.266550 2359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:07:14.296423 kubelet[2359]: I0515 00:07:14.296260 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:07:14.299690 kubelet[2359]: I0515 00:07:14.299132 2359 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:07:14.299690 kubelet[2359]: I0515 00:07:14.299174 2359 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:07:14.299690 kubelet[2359]: I0515 00:07:14.299201 2359 kubelet.go:2337] "Starting kubelet main sync loop" May 15 00:07:14.299690 kubelet[2359]: E0515 00:07:14.299262 2359 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:07:14.300207 kubelet[2359]: I0515 00:07:14.300138 2359 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:07:14.300207 kubelet[2359]: I0515 00:07:14.300173 2359 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:07:14.300207 kubelet[2359]: I0515 00:07:14.300212 2359 state_mem.go:36] "Initialized new in-memory state store" May 15 00:07:14.302944 kubelet[2359]: W0515 00:07:14.302903 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.303073 kubelet[2359]: E0515 00:07:14.302956 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:14.305512 kubelet[2359]: I0515 00:07:14.305389 2359 policy_none.go:49] "None policy: Start" May 15 00:07:14.306829 kubelet[2359]: I0515 00:07:14.306315 2359 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:07:14.306829 kubelet[2359]: I0515 00:07:14.306345 2359 state_mem.go:35] "Initializing new in-memory state store" May 15 00:07:14.316301 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:07:14.332001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 00:07:14.348751 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 15 00:07:14.351252 kubelet[2359]: I0515 00:07:14.351211 2359 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:07:14.352348 kubelet[2359]: I0515 00:07:14.351558 2359 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:07:14.352348 kubelet[2359]: I0515 00:07:14.351778 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:07:14.354466 kubelet[2359]: E0515 00:07:14.354424 2359 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" not found" May 15 00:07:14.366474 kubelet[2359]: I0515 00:07:14.366416 2359 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.367023 kubelet[2359]: E0515 00:07:14.366984 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.400430 kubelet[2359]: I0515 00:07:14.400337 2359 topology_manager.go:215] "Topology Admit Handler" podUID="757f6e6d4c00f7b86e6714d3656bdc46" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.405818 kubelet[2359]: I0515 00:07:14.405756 2359 topology_manager.go:215] "Topology Admit Handler" podUID="efa7222632dc8b1b9f7ea2d38f185055" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.410308 kubelet[2359]: I0515 00:07:14.410264 2359 topology_manager.go:215] "Topology Admit Handler" podUID="e620238e44ab0b6890f6ecc88ca25343" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.417875 systemd[1]: Created slice kubepods-burstable-pod757f6e6d4c00f7b86e6714d3656bdc46.slice - libcontainer container kubepods-burstable-pod757f6e6d4c00f7b86e6714d3656bdc46.slice. May 15 00:07:14.433592 systemd[1]: Created slice kubepods-burstable-podefa7222632dc8b1b9f7ea2d38f185055.slice - libcontainer container kubepods-burstable-podefa7222632dc8b1b9f7ea2d38f185055.slice. May 15 00:07:14.446468 systemd[1]: Created slice kubepods-burstable-pode620238e44ab0b6890f6ecc88ca25343.slice - libcontainer container kubepods-burstable-pode620238e44ab0b6890f6ecc88ca25343.slice. 
May 15 00:07:14.461650 kubelet[2359]: I0515 00:07:14.461567 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/757f6e6d4c00f7b86e6714d3656bdc46-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"757f6e6d4c00f7b86e6714d3656bdc46\") " pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.466032 kubelet[2359]: E0515 00:07:14.465956 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="400ms" May 15 00:07:14.562385 kubelet[2359]: I0515 00:07:14.562307 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/757f6e6d4c00f7b86e6714d3656bdc46-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"757f6e6d4c00f7b86e6714d3656bdc46\") " pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.562385 kubelet[2359]: I0515 00:07:14.562379 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/757f6e6d4c00f7b86e6714d3656bdc46-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"757f6e6d4c00f7b86e6714d3656bdc46\") " pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.562645 kubelet[2359]: I0515 00:07:14.562413 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.562645 kubelet[2359]: I0515 00:07:14.562441 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.562645 kubelet[2359]: I0515 00:07:14.562466 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.562645 kubelet[2359]: I0515 00:07:14.562493 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.562846 kubelet[2359]: I0515 00:07:14.562551 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.562846 kubelet[2359]: I0515 00:07:14.562580 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e620238e44ab0b6890f6ecc88ca25343-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"e620238e44ab0b6890f6ecc88ca25343\") " pod="kube-system/kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.571816 kubelet[2359]: I0515 00:07:14.571706 2359 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.572165 kubelet[2359]: E0515 00:07:14.572121 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.730137 containerd[1478]: time="2025-05-15T00:07:14.729552517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,Uid:757f6e6d4c00f7b86e6714d3656bdc46,Namespace:kube-system,Attempt:0,}" May 15 00:07:14.745821 containerd[1478]: time="2025-05-15T00:07:14.745768365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,Uid:efa7222632dc8b1b9f7ea2d38f185055,Namespace:kube-system,Attempt:0,}" May 15 00:07:14.750903 containerd[1478]: time="2025-05-15T00:07:14.750736148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,Uid:e620238e44ab0b6890f6ecc88ca25343,Namespace:kube-system,Attempt:0,}" May 15 00:07:14.867231 kubelet[2359]: E0515 00:07:14.867153 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="800ms" May 15 00:07:14.977260 kubelet[2359]: I0515 00:07:14.977218 2359 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:14.977710 kubelet[2359]: E0515 00:07:14.977662 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:15.068384 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2247322592.mount: Deactivated successfully. May 15 00:07:15.076938 containerd[1478]: time="2025-05-15T00:07:15.076878752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:07:15.082113 containerd[1478]: time="2025-05-15T00:07:15.082025324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" May 15 00:07:15.083737 containerd[1478]: time="2025-05-15T00:07:15.083682632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:07:15.084824 containerd[1478]: time="2025-05-15T00:07:15.084766273Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:07:15.087382 containerd[1478]: time="2025-05-15T00:07:15.087309441Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:07:15.089287 containerd[1478]: time="2025-05-15T00:07:15.089142410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:07:15.090702 containerd[1478]: time="2025-05-15T00:07:15.090641631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:07:15.092135 containerd[1478]: time="2025-05-15T00:07:15.092062199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:07:15.093809 containerd[1478]: time="2025-05-15T00:07:15.093145805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 363.456052ms" May 15 00:07:15.099340 containerd[1478]: time="2025-05-15T00:07:15.099284024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 353.395833ms" May 15 00:07:15.100758 containerd[1478]: time="2025-05-15T00:07:15.100395236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 349.541067ms" May 15 00:07:15.231948 kubelet[2359]: W0515 00:07:15.231553 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.128.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.231948 kubelet[2359]: E0515 00:07:15.231643 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.271847 containerd[1478]: time="2025-05-15T00:07:15.271474917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:07:15.271847 containerd[1478]: time="2025-05-15T00:07:15.271566576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:07:15.271847 containerd[1478]: time="2025-05-15T00:07:15.271602033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:15.271847 containerd[1478]: time="2025-05-15T00:07:15.271721646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:15.275583 containerd[1478]: time="2025-05-15T00:07:15.270998781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:07:15.278847 containerd[1478]: time="2025-05-15T00:07:15.278473057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:07:15.278847 containerd[1478]: time="2025-05-15T00:07:15.278543794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:07:15.278847 containerd[1478]: time="2025-05-15T00:07:15.278570857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:15.278847 containerd[1478]: time="2025-05-15T00:07:15.278685901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:15.279590 containerd[1478]: time="2025-05-15T00:07:15.275439450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:07:15.279590 containerd[1478]: time="2025-05-15T00:07:15.279284418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:15.279590 containerd[1478]: time="2025-05-15T00:07:15.279405424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:15.322404 systemd[1]: Started cri-containerd-c8d46bd69779877afda2422f1151d7f3eec3fef0c7ade3797ca7d5239151f8da.scope - libcontainer container c8d46bd69779877afda2422f1151d7f3eec3fef0c7ade3797ca7d5239151f8da. May 15 00:07:15.334987 systemd[1]: Started cri-containerd-755a8bd23a9ab2a36550562155050e3c0977a65a0dc32933ef7003ff9ce35ed5.scope - libcontainer container 755a8bd23a9ab2a36550562155050e3c0977a65a0dc32933ef7003ff9ce35ed5. 
May 15 00:07:15.338804 systemd[1]: Started cri-containerd-a03849867348f8889c1a544dc3f6bf00abdc792414c09dae0bb25ef3a2fcb03d.scope - libcontainer container a03849867348f8889c1a544dc3f6bf00abdc792414c09dae0bb25ef3a2fcb03d. May 15 00:07:15.441913 containerd[1478]: time="2025-05-15T00:07:15.441719522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,Uid:efa7222632dc8b1b9f7ea2d38f185055,Namespace:kube-system,Attempt:0,} returns sandbox id \"755a8bd23a9ab2a36550562155050e3c0977a65a0dc32933ef7003ff9ce35ed5\"" May 15 00:07:15.450238 kubelet[2359]: E0515 00:07:15.450149 2359 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb" May 15 00:07:15.460101 containerd[1478]: time="2025-05-15T00:07:15.458740780Z" level=info msg="CreateContainer within sandbox \"755a8bd23a9ab2a36550562155050e3c0977a65a0dc32933ef7003ff9ce35ed5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:07:15.462507 containerd[1478]: time="2025-05-15T00:07:15.462352465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,Uid:757f6e6d4c00f7b86e6714d3656bdc46,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8d46bd69779877afda2422f1151d7f3eec3fef0c7ade3797ca7d5239151f8da\"" May 15 00:07:15.465436 kubelet[2359]: E0515 00:07:15.465392 2359 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672" May 15 00:07:15.469332 containerd[1478]: time="2025-05-15T00:07:15.469293398Z" level=info msg="CreateContainer within sandbox \"c8d46bd69779877afda2422f1151d7f3eec3fef0c7ade3797ca7d5239151f8da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:07:15.471399 containerd[1478]: time="2025-05-15T00:07:15.471339985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,Uid:e620238e44ab0b6890f6ecc88ca25343,Namespace:kube-system,Attempt:0,} returns sandbox id \"a03849867348f8889c1a544dc3f6bf00abdc792414c09dae0bb25ef3a2fcb03d\"" May 15 00:07:15.475427 kubelet[2359]: E0515 00:07:15.475388 2359 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672" May 15 00:07:15.478394 containerd[1478]: time="2025-05-15T00:07:15.478312162Z" level=info msg="CreateContainer within sandbox \"a03849867348f8889c1a544dc3f6bf00abdc792414c09dae0bb25ef3a2fcb03d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:07:15.487260 containerd[1478]: time="2025-05-15T00:07:15.487165811Z" level=info msg="CreateContainer within sandbox \"755a8bd23a9ab2a36550562155050e3c0977a65a0dc32933ef7003ff9ce35ed5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"033ae6f2d120fda0f20f5ecb32e75cfe63246ac10ad3e08bbe42403f99b43254\"" May 15 00:07:15.488176 containerd[1478]: time="2025-05-15T00:07:15.488138999Z" level=info msg="StartContainer for 
\"033ae6f2d120fda0f20f5ecb32e75cfe63246ac10ad3e08bbe42403f99b43254\"" May 15 00:07:15.513074 containerd[1478]: time="2025-05-15T00:07:15.512587911Z" level=info msg="CreateContainer within sandbox \"c8d46bd69779877afda2422f1151d7f3eec3fef0c7ade3797ca7d5239151f8da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d74ce73566457bfa3af4d308600668a72596fda21f32c1e2de9c14e7ed947c2d\"" May 15 00:07:15.515072 containerd[1478]: time="2025-05-15T00:07:15.513930981Z" level=info msg="StartContainer for \"d74ce73566457bfa3af4d308600668a72596fda21f32c1e2de9c14e7ed947c2d\"" May 15 00:07:15.517172 containerd[1478]: time="2025-05-15T00:07:15.517133792Z" level=info msg="CreateContainer within sandbox \"a03849867348f8889c1a544dc3f6bf00abdc792414c09dae0bb25ef3a2fcb03d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c9910b472ebf9ba489a505c7f6b055a6227018c7ae5ea38e1eb835e9008c9ed\"" May 15 00:07:15.517944 containerd[1478]: time="2025-05-15T00:07:15.517915494Z" level=info msg="StartContainer for \"0c9910b472ebf9ba489a505c7f6b055a6227018c7ae5ea38e1eb835e9008c9ed\"" May 15 00:07:15.541859 systemd[1]: Started cri-containerd-033ae6f2d120fda0f20f5ecb32e75cfe63246ac10ad3e08bbe42403f99b43254.scope - libcontainer container 033ae6f2d120fda0f20f5ecb32e75cfe63246ac10ad3e08bbe42403f99b43254. May 15 00:07:15.544545 kubelet[2359]: W0515 00:07:15.544465 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826&limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.544545 kubelet[2359]: E0515 00:07:15.544561 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826&limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.545183 kubelet[2359]: W0515 00:07:15.545138 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.545357 kubelet[2359]: E0515 00:07:15.545318 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.587389 systemd[1]: Started cri-containerd-0c9910b472ebf9ba489a505c7f6b055a6227018c7ae5ea38e1eb835e9008c9ed.scope - libcontainer container 0c9910b472ebf9ba489a505c7f6b055a6227018c7ae5ea38e1eb835e9008c9ed. May 15 00:07:15.590877 systemd[1]: Started cri-containerd-d74ce73566457bfa3af4d308600668a72596fda21f32c1e2de9c14e7ed947c2d.scope - libcontainer container d74ce73566457bfa3af4d308600668a72596fda21f32c1e2de9c14e7ed947c2d. 
May 15 00:07:15.621611 kubelet[2359]: W0515 00:07:15.620652 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.621611 kubelet[2359]: E0515 00:07:15.620759 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.108:6443: connect: connection refused May 15 00:07:15.664581 containerd[1478]: time="2025-05-15T00:07:15.664523670Z" level=info msg="StartContainer for \"033ae6f2d120fda0f20f5ecb32e75cfe63246ac10ad3e08bbe42403f99b43254\" returns successfully" May 15 00:07:15.667694 kubelet[2359]: E0515 00:07:15.667631 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="1.6s" May 15 00:07:15.705249 containerd[1478]: time="2025-05-15T00:07:15.704116400Z" level=info msg="StartContainer for \"d74ce73566457bfa3af4d308600668a72596fda21f32c1e2de9c14e7ed947c2d\" returns successfully" May 15 00:07:15.734596 containerd[1478]: time="2025-05-15T00:07:15.734531215Z" level=info msg="StartContainer for \"0c9910b472ebf9ba489a505c7f6b055a6227018c7ae5ea38e1eb835e9008c9ed\" returns successfully" May 15 00:07:15.787995 kubelet[2359]: I0515 00:07:15.787366 2359 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:15.787995 kubelet[2359]: E0515 00:07:15.787823 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:17.395636 kubelet[2359]: I0515 00:07:17.394557 2359 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:18.999918 kubelet[2359]: E0515 00:07:18.999843 2359 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" not found" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:19.194330 kubelet[2359]: I0515 00:07:19.194268 2359 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:19.242688 kubelet[2359]: I0515 00:07:19.242642 2359 apiserver.go:52] "Watching apiserver" May 15 00:07:19.261862 kubelet[2359]: I0515 00:07:19.261711 2359 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:07:19.940370 kubelet[2359]: W0515 00:07:19.939344 2359 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 15 00:07:21.296313 systemd[1]: Reload requested from client PID 2639 ('systemctl') (unit session-9.scope)... May 15 00:07:21.296337 systemd[1]: Reloading... May 15 00:07:21.463084 zram_generator::config[2687]: No configuration found. 
May 15 00:07:21.611676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:07:21.785903 systemd[1]: Reloading finished in 488 ms. May 15 00:07:21.823509 kubelet[2359]: E0515 00:07:21.823276 2359 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826.183f8aa7eaa8e95f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,UID:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,},FirstTimestamp:2025-05-15 00:07:14.244151647 +0000 UTC m=+0.896227278,LastTimestamp:2025-05-15 00:07:14.244151647 +0000 UTC m=+0.896227278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826,}" May 15 00:07:21.824238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:21.848029 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:07:21.848416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:07:21.848515 systemd[1]: kubelet.service: Consumed 1.405s CPU time, 118.2M memory peak. May 15 00:07:21.853446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:21.878661 update_engine[1469]: I20250515 00:07:21.878598 1469 update_attempter.cc:509] Updating boot flags... May 15 00:07:21.967993 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2736) May 15 00:07:22.192477 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2740) May 15 00:07:22.227297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:07:22.254410 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:07:22.391929 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:07:22.391929 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:07:22.391929 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:07:22.392471 kubelet[2749]: I0515 00:07:22.391984 2749 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:07:22.401484 kubelet[2749]: I0515 00:07:22.399286 2749 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 00:07:22.401484 kubelet[2749]: I0515 00:07:22.399318 2749 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:07:22.401484 kubelet[2749]: I0515 00:07:22.399582 2749 server.go:927] "Client rotation is on, will bootstrap in background" May 15 00:07:22.405306 sudo[2762]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 00:07:22.406520 sudo[2762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 00:07:22.409427 kubelet[2749]: I0515 00:07:22.409402 2749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:07:22.412837 kubelet[2749]: I0515 00:07:22.412808 2749 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:07:22.424450 kubelet[2749]: I0515 00:07:22.424412 2749 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:07:22.424883 kubelet[2749]: I0515 00:07:22.424823 2749 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:07:22.425761 kubelet[2749]: I0515 00:07:22.424883 2749 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 00:07:22.425761 kubelet[2749]: I0515 00:07:22.425216 2749 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:07:22.425761 kubelet[2749]: I0515 00:07:22.425237 2749 container_manager_linux.go:301] "Creating device plugin manager" May 15 00:07:22.425761 kubelet[2749]: I0515 00:07:22.425303 2749 
state_mem.go:36] "Initialized new in-memory state store" May 15 00:07:22.425761 kubelet[2749]: I0515 00:07:22.425451 2749 kubelet.go:400] "Attempting to sync node with API server" May 15 00:07:22.426176 kubelet[2749]: I0515 00:07:22.425469 2749 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:07:22.427091 kubelet[2749]: I0515 00:07:22.426289 2749 kubelet.go:312] "Adding apiserver pod source" May 15 00:07:22.427091 kubelet[2749]: I0515 00:07:22.426333 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:07:22.429217 kubelet[2749]: I0515 00:07:22.429188 2749 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:07:22.429456 kubelet[2749]: I0515 00:07:22.429432 2749 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:07:22.430008 kubelet[2749]: I0515 00:07:22.429983 2749 server.go:1264] "Started kubelet" May 15 00:07:22.437901 kubelet[2749]: I0515 00:07:22.434657 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:07:22.447093 kubelet[2749]: I0515 00:07:22.444524 2749 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:07:22.447093 kubelet[2749]: I0515 00:07:22.446173 2749 server.go:455] "Adding debug handlers to kubelet server" May 15 00:07:22.447728 kubelet[2749]: I0515 00:07:22.447664 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:07:22.447959 kubelet[2749]: I0515 00:07:22.447933 2749 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:07:22.457867 kubelet[2749]: I0515 00:07:22.456812 2749 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 00:07:22.464928 kubelet[2749]: I0515 00:07:22.464376 2749 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:07:22.464928 kubelet[2749]: I0515 00:07:22.464589 2749 reconciler.go:26] "Reconciler: start to sync state" May 15 00:07:22.499510 kubelet[2749]: I0515 00:07:22.497695 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:07:22.502513 kubelet[2749]: I0515 00:07:22.502471 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:07:22.502691 kubelet[2749]: I0515 00:07:22.502528 2749 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:07:22.502691 kubelet[2749]: I0515 00:07:22.502564 2749 kubelet.go:2337] "Starting kubelet main sync loop" May 15 00:07:22.502691 kubelet[2749]: E0515 00:07:22.502630 2749 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:07:22.520874 kubelet[2749]: I0515 00:07:22.518595 2749 factory.go:221] Registration of the systemd container factory successfully May 15 00:07:22.520874 kubelet[2749]: I0515 00:07:22.518735 2749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:07:22.538983 kubelet[2749]: I0515 00:07:22.538942 2749 factory.go:221] Registration of the containerd container factory successfully May 15 00:07:22.557856 kubelet[2749]: E0515 00:07:22.556817 2749 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:07:22.565960 kubelet[2749]: E0515 00:07:22.565928 2749 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" May 15 00:07:22.571933 kubelet[2749]: I0515 00:07:22.569617 2749 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.585201 kubelet[2749]: I0515 00:07:22.585168 2749 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.586649 kubelet[2749]: I0515 00:07:22.585461 2749 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.602837 kubelet[2749]: E0515 00:07:22.602807 2749 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:07:22.653221 kubelet[2749]: I0515 00:07:22.653185 2749 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:07:22.653221 kubelet[2749]: I0515 00:07:22.653213 2749 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:07:22.653449 kubelet[2749]: I0515 00:07:22.653240 2749 state_mem.go:36] "Initialized new in-memory state store" May 15 00:07:22.653504 kubelet[2749]: I0515 00:07:22.653487 2749 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:07:22.653562 kubelet[2749]: I0515 00:07:22.653505 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:07:22.653562 kubelet[2749]: I0515 00:07:22.653553 2749 policy_none.go:49] "None policy: Start" May 15 00:07:22.654504 kubelet[2749]: I0515 00:07:22.654475 2749 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:07:22.654630 kubelet[2749]: I0515 00:07:22.654514 2749 state_mem.go:35] "Initializing new in-memory state store" May 15 00:07:22.655526 kubelet[2749]: I0515 00:07:22.654889 2749 state_mem.go:75] "Updated machine memory state" May 15 00:07:22.665712 kubelet[2749]: I0515 00:07:22.665333 2749 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:07:22.665712 kubelet[2749]: I0515 00:07:22.665545 2749 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:07:22.670997 kubelet[2749]: I0515 00:07:22.670969 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:07:22.804485 kubelet[2749]: I0515 00:07:22.804341 2749 topology_manager.go:215] "Topology Admit Handler" podUID="757f6e6d4c00f7b86e6714d3656bdc46" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.804666 kubelet[2749]: I0515 00:07:22.804487 2749 topology_manager.go:215] "Topology Admit Handler" podUID="efa7222632dc8b1b9f7ea2d38f185055" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.804666 kubelet[2749]: I0515 00:07:22.804578 2749 topology_manager.go:215] "Topology Admit Handler" podUID="e620238e44ab0b6890f6ecc88ca25343" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.815295 kubelet[2749]: W0515 00:07:22.815174 2749 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a 
DNS label is recommended: [must be no more than 63 characters] May 15 00:07:22.817972 kubelet[2749]: W0515 00:07:22.817521 2749 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 15 00:07:22.821801 kubelet[2749]: W0515 00:07:22.821770 2749 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 15 00:07:22.821940 kubelet[2749]: E0515 00:07:22.821852 2749 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869093 kubelet[2749]: I0515 00:07:22.868592 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869093 kubelet[2749]: I0515 00:07:22.868652 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869093 kubelet[2749]: I0515 00:07:22.868691 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869093 kubelet[2749]: I0515 00:07:22.868728 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/757f6e6d4c00f7b86e6714d3656bdc46-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"757f6e6d4c00f7b86e6714d3656bdc46\") " pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869451 kubelet[2749]: I0515 00:07:22.868755 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/757f6e6d4c00f7b86e6714d3656bdc46-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"757f6e6d4c00f7b86e6714d3656bdc46\") " pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869451 kubelet[2749]: I0515 00:07:22.868785 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/757f6e6d4c00f7b86e6714d3656bdc46-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"757f6e6d4c00f7b86e6714d3656bdc46\") " pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869451 kubelet[2749]: I0515 00:07:22.868822 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869451 kubelet[2749]: I0515 00:07:22.868853 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efa7222632dc8b1b9f7ea2d38f185055-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"efa7222632dc8b1b9f7ea2d38f185055\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:22.869697 kubelet[2749]: I0515 00:07:22.868882 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e620238e44ab0b6890f6ecc88ca25343-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" (UID: \"e620238e44ab0b6890f6ecc88ca25343\") " pod="kube-system/kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:23.270218 sudo[2762]: pam_unix(sudo:session): session closed for user root May 15 00:07:23.443821 kubelet[2749]: I0515 00:07:23.443755 2749 apiserver.go:52] "Watching apiserver" May 15 00:07:23.465058 kubelet[2749]: I0515 00:07:23.464985 2749 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:07:23.619360 kubelet[2749]: W0515 00:07:23.619290 2749 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 15 00:07:23.619940 kubelet[2749]: E0515 00:07:23.619608 2749 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" May 15 00:07:23.661475 kubelet[2749]: I0515 00:07:23.661015 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" podStartSLOduration=1.660968616 podStartE2EDuration="1.660968616s" podCreationTimestamp="2025-05-15 00:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:07:23.643012941 +0000 UTC m=+1.351143950" watchObservedRunningTime="2025-05-15 00:07:23.660968616 +0000 UTC m=+1.369099612" May 15 00:07:23.676540 kubelet[2749]: I0515 00:07:23.676240 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" podStartSLOduration=4.676204066 podStartE2EDuration="4.676204066s" 
podCreationTimestamp="2025-05-15 00:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:07:23.662382108 +0000 UTC m=+1.370513120" watchObservedRunningTime="2025-05-15 00:07:23.676204066 +0000 UTC m=+1.384335067" May 15 00:07:23.676540 kubelet[2749]: I0515 00:07:23.676387 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" podStartSLOduration=1.676381622 podStartE2EDuration="1.676381622s" podCreationTimestamp="2025-05-15 00:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:07:23.674103099 +0000 UTC m=+1.382234110" watchObservedRunningTime="2025-05-15 00:07:23.676381622 +0000 UTC m=+1.384512632" May 15 00:07:24.938932 sudo[1765]: pam_unix(sudo:session): session closed for user root May 15 00:07:24.981424 sshd[1764]: Connection closed by 147.75.109.163 port 38848 May 15 00:07:24.982481 sshd-session[1762]: pam_unix(sshd:session): session closed for user core May 15 00:07:24.989066 systemd[1]: sshd@8-10.128.0.108:22-147.75.109.163:38848.service: Deactivated successfully. May 15 00:07:24.992115 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:07:24.992596 systemd[1]: session-9.scope: Consumed 6.265s CPU time, 294.2M memory peak. May 15 00:07:24.994591 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. May 15 00:07:24.996130 systemd-logind[1458]: Removed session 9. May 15 00:07:36.275512 kubelet[2749]: I0515 00:07:36.275453 2749 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:07:36.276161 containerd[1478]: time="2025-05-15T00:07:36.275994780Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:07:36.276593 kubelet[2749]: I0515 00:07:36.276508 2749 topology_manager.go:215] "Topology Admit Handler" podUID="35518fdf-837b-41ad-bc21-8dfe0c0c21a5" podNamespace="kube-system" podName="kube-proxy-f9j76" May 15 00:07:36.277629 kubelet[2749]: I0515 00:07:36.277591 2749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:07:36.295899 systemd[1]: Created slice kubepods-besteffort-pod35518fdf_837b_41ad_bc21_8dfe0c0c21a5.slice - libcontainer container kubepods-besteffort-pod35518fdf_837b_41ad_bc21_8dfe0c0c21a5.slice. May 15 00:07:36.323099 kubelet[2749]: I0515 00:07:36.319736 2749 topology_manager.go:215] "Topology Admit Handler" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" podNamespace="kube-system" podName="cilium-r8b6b" May 15 00:07:36.335103 systemd[1]: Created slice kubepods-burstable-pod5e45a971_439f_4641_ad69_ac96cf87af9a.slice - libcontainer container kubepods-burstable-pod5e45a971_439f_4641_ad69_ac96cf87af9a.slice. 
May 15 00:07:36.355249 kubelet[2749]: I0515 00:07:36.355140 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-run\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355436 kubelet[2749]: I0515 00:07:36.355289 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-hostproc\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355436 kubelet[2749]: I0515 00:07:36.355325 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-lib-modules\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355436 kubelet[2749]: I0515 00:07:36.355352 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-kube-proxy\") pod \"kube-proxy-f9j76\" (UID: \"35518fdf-837b-41ad-bc21-8dfe0c0c21a5\") " pod="kube-system/kube-proxy-f9j76" May 15 00:07:36.355436 kubelet[2749]: I0515 00:07:36.355397 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-xtables-lock\") pod \"kube-proxy-f9j76\" (UID: \"35518fdf-837b-41ad-bc21-8dfe0c0c21a5\") " pod="kube-system/kube-proxy-f9j76" May 15 00:07:36.355436 kubelet[2749]: I0515 00:07:36.355424 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-bpf-maps\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355718 kubelet[2749]: I0515 00:07:36.355482 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-etc-cni-netd\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355718 kubelet[2749]: I0515 00:07:36.355512 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-config-path\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355718 kubelet[2749]: I0515 00:07:36.355551 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-hubble-tls\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355718 kubelet[2749]: I0515 00:07:36.355579 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-cgroup\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.355718 kubelet[2749]: I0515 00:07:36.355617 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-lib-modules\") pod \"kube-proxy-f9j76\" (UID: \"35518fdf-837b-41ad-bc21-8dfe0c0c21a5\") " pod="kube-system/kube-proxy-f9j76" May 15 00:07:36.355718 kubelet[2749]: I0515 00:07:36.355645 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-net\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.356007 kubelet[2749]: I0515 00:07:36.355688 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvd4\" (UniqueName: \"kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.356007 kubelet[2749]: I0515 00:07:36.355719 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9fx\" (UniqueName: \"kubernetes.io/projected/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-kube-api-access-ml9fx\") pod \"kube-proxy-f9j76\" (UID: \"35518fdf-837b-41ad-bc21-8dfe0c0c21a5\") " pod="kube-system/kube-proxy-f9j76" May 15 00:07:36.356007 kubelet[2749]: I0515 00:07:36.355748 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cni-path\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.356007 kubelet[2749]: I0515 00:07:36.355779 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e45a971-439f-4641-ad69-ac96cf87af9a-clustermesh-secrets\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.356007 kubelet[2749]: I0515 00:07:36.355808 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-xtables-lock\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.356507 kubelet[2749]: I0515 00:07:36.355834 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-kernel\") pod \"cilium-r8b6b\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " pod="kube-system/cilium-r8b6b" May 15 00:07:36.493442 kubelet[2749]: E0515 00:07:36.493396 2749 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 00:07:36.493442 kubelet[2749]: E0515 00:07:36.493448 2749 projected.go:200] Error preparing data for projected volume kube-api-access-ml9fx for 
pod kube-system/kube-proxy-f9j76: configmap "kube-root-ca.crt" not found May 15 00:07:36.493655 kubelet[2749]: E0515 00:07:36.493545 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-kube-api-access-ml9fx podName:35518fdf-837b-41ad-bc21-8dfe0c0c21a5 nodeName:}" failed. No retries permitted until 2025-05-15 00:07:36.99351478 +0000 UTC m=+14.701645783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ml9fx" (UniqueName: "kubernetes.io/projected/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-kube-api-access-ml9fx") pod "kube-proxy-f9j76" (UID: "35518fdf-837b-41ad-bc21-8dfe0c0c21a5") : configmap "kube-root-ca.crt" not found May 15 00:07:36.495860 kubelet[2749]: E0515 00:07:36.495818 2749 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 00:07:36.495860 kubelet[2749]: E0515 00:07:36.495850 2749 projected.go:200] Error preparing data for projected volume kube-api-access-gqvd4 for pod kube-system/cilium-r8b6b: configmap "kube-root-ca.crt" not found May 15 00:07:36.496064 kubelet[2749]: E0515 00:07:36.495949 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4 podName:5e45a971-439f-4641-ad69-ac96cf87af9a nodeName:}" failed. No retries permitted until 2025-05-15 00:07:36.995926475 +0000 UTC m=+14.704057463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gqvd4" (UniqueName: "kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4") pod "cilium-r8b6b" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a") : configmap "kube-root-ca.crt" not found May 15 00:07:37.063813 kubelet[2749]: E0515 00:07:37.063505 2749 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 00:07:37.063813 kubelet[2749]: E0515 00:07:37.063567 2749 projected.go:200] Error preparing data for projected volume kube-api-access-ml9fx for pod kube-system/kube-proxy-f9j76: configmap "kube-root-ca.crt" not found May 15 00:07:37.063813 kubelet[2749]: E0515 00:07:37.063637 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-kube-api-access-ml9fx podName:35518fdf-837b-41ad-bc21-8dfe0c0c21a5 nodeName:}" failed. No retries permitted until 2025-05-15 00:07:38.063613957 +0000 UTC m=+15.771744964 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ml9fx" (UniqueName: "kubernetes.io/projected/35518fdf-837b-41ad-bc21-8dfe0c0c21a5-kube-api-access-ml9fx") pod "kube-proxy-f9j76" (UID: "35518fdf-837b-41ad-bc21-8dfe0c0c21a5") : configmap "kube-root-ca.crt" not found May 15 00:07:37.064248 kubelet[2749]: E0515 00:07:37.064168 2749 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 15 00:07:37.064248 kubelet[2749]: E0515 00:07:37.064197 2749 projected.go:200] Error preparing data for projected volume kube-api-access-gqvd4 for pod kube-system/cilium-r8b6b: configmap "kube-root-ca.crt" not found May 15 00:07:37.064248 kubelet[2749]: E0515 00:07:37.064250 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4 podName:5e45a971-439f-4641-ad69-ac96cf87af9a nodeName:}" failed. 
No retries permitted until 2025-05-15 00:07:38.064231768 +0000 UTC m=+15.772362767 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gqvd4" (UniqueName: "kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4") pod "cilium-r8b6b" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a") : configmap "kube-root-ca.crt" not found May 15 00:07:37.302986 kubelet[2749]: I0515 00:07:37.302926 2749 topology_manager.go:215] "Topology Admit Handler" podUID="d070d3e1-9867-4e0b-aeb1-929e35c0ae2b" podNamespace="kube-system" podName="cilium-operator-599987898-2prds" May 15 00:07:37.313749 systemd[1]: Created slice kubepods-besteffort-podd070d3e1_9867_4e0b_aeb1_929e35c0ae2b.slice - libcontainer container kubepods-besteffort-podd070d3e1_9867_4e0b_aeb1_929e35c0ae2b.slice. May 15 00:07:37.366536 kubelet[2749]: I0515 00:07:37.366472 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jldw\" (UniqueName: \"kubernetes.io/projected/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-kube-api-access-8jldw\") pod \"cilium-operator-599987898-2prds\" (UID: \"d070d3e1-9867-4e0b-aeb1-929e35c0ae2b\") " pod="kube-system/cilium-operator-599987898-2prds" May 15 00:07:37.366739 kubelet[2749]: I0515 00:07:37.366548 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-cilium-config-path\") pod \"cilium-operator-599987898-2prds\" (UID: \"d070d3e1-9867-4e0b-aeb1-929e35c0ae2b\") " pod="kube-system/cilium-operator-599987898-2prds" May 15 00:07:37.620374 containerd[1478]: time="2025-05-15T00:07:37.620192137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2prds,Uid:d070d3e1-9867-4e0b-aeb1-929e35c0ae2b,Namespace:kube-system,Attempt:0,}" May 15 00:07:37.662102 containerd[1478]: time="2025-05-15T00:07:37.661704029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:07:37.662102 containerd[1478]: time="2025-05-15T00:07:37.661779621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:07:37.662102 containerd[1478]: time="2025-05-15T00:07:37.661807935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:37.662396 containerd[1478]: time="2025-05-15T00:07:37.662178866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:37.699341 systemd[1]: Started cri-containerd-aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b.scope - libcontainer container aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b. 
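Editor's note: the projected service-account volumes above fail because the kube-root-ca.crt configmap does not exist yet, and the retry delay grows from 500ms (durationBeforeRetry 500ms) to 1s on the next failure. A minimal sketch of that doubling backoff; the cap value and function are illustrative assumptions and not kubelet's actual constants.

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a limit, mirroring the
// 500ms -> 1s progression in the nestedpendingoperations lines above.
// The 2-minute limit here is an arbitrary illustration, not kubelet's value.
func nextBackoff(prev, limit time.Duration) time.Duration {
	next := prev * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	d := 500 * time.Millisecond
	for i := 0; i < 4; i++ {
		fmt.Println(d) // 500ms, 1s, 2s, 4s
		d = nextBackoff(d, 2*time.Minute)
	}
}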
May 15 00:07:37.756658 containerd[1478]: time="2025-05-15T00:07:37.756022126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2prds,Uid:d070d3e1-9867-4e0b-aeb1-929e35c0ae2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\"" May 15 00:07:37.759366 containerd[1478]: time="2025-05-15T00:07:37.759299113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:07:38.107869 containerd[1478]: time="2025-05-15T00:07:38.107786440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9j76,Uid:35518fdf-837b-41ad-bc21-8dfe0c0c21a5,Namespace:kube-system,Attempt:0,}" May 15 00:07:38.139768 containerd[1478]: time="2025-05-15T00:07:38.138824900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:07:38.139768 containerd[1478]: time="2025-05-15T00:07:38.139704977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:07:38.139768 containerd[1478]: time="2025-05-15T00:07:38.139730583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:38.140401 containerd[1478]: time="2025-05-15T00:07:38.139874341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:38.145161 containerd[1478]: time="2025-05-15T00:07:38.144477246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8b6b,Uid:5e45a971-439f-4641-ad69-ac96cf87af9a,Namespace:kube-system,Attempt:0,}" May 15 00:07:38.168850 systemd[1]: Started cri-containerd-3fbc6f058c4417a9f17a66511b5f1c97718b622bd801668d2974f01f90fe11ac.scope - libcontainer container 3fbc6f058c4417a9f17a66511b5f1c97718b622bd801668d2974f01f90fe11ac. May 15 00:07:38.200120 containerd[1478]: time="2025-05-15T00:07:38.199927252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:07:38.200120 containerd[1478]: time="2025-05-15T00:07:38.200026587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:07:38.202581 containerd[1478]: time="2025-05-15T00:07:38.202085000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:38.202581 containerd[1478]: time="2025-05-15T00:07:38.202227058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:07:38.213639 containerd[1478]: time="2025-05-15T00:07:38.213590092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9j76,Uid:35518fdf-837b-41ad-bc21-8dfe0c0c21a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fbc6f058c4417a9f17a66511b5f1c97718b622bd801668d2974f01f90fe11ac\"" May 15 00:07:38.219991 containerd[1478]: time="2025-05-15T00:07:38.219938092Z" level=info msg="CreateContainer within sandbox \"3fbc6f058c4417a9f17a66511b5f1c97718b622bd801668d2974f01f90fe11ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:07:38.239314 systemd[1]: Started cri-containerd-d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00.scope - libcontainer container d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00. May 15 00:07:38.255541 containerd[1478]: time="2025-05-15T00:07:38.255023559Z" level=info msg="CreateContainer within sandbox \"3fbc6f058c4417a9f17a66511b5f1c97718b622bd801668d2974f01f90fe11ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"710517acfcd691a9b666f78aefba4b0fcbb98b850b349e01b0812c4c80b2889b\"" May 15 00:07:38.258097 containerd[1478]: time="2025-05-15T00:07:38.257940077Z" level=info msg="StartContainer for \"710517acfcd691a9b666f78aefba4b0fcbb98b850b349e01b0812c4c80b2889b\"" May 15 00:07:38.291149 containerd[1478]: time="2025-05-15T00:07:38.291003747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8b6b,Uid:5e45a971-439f-4641-ad69-ac96cf87af9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\"" May 15 00:07:38.313296 systemd[1]: Started cri-containerd-710517acfcd691a9b666f78aefba4b0fcbb98b850b349e01b0812c4c80b2889b.scope - libcontainer container 710517acfcd691a9b666f78aefba4b0fcbb98b850b349e01b0812c4c80b2889b. May 15 00:07:38.357162 containerd[1478]: time="2025-05-15T00:07:38.356103925Z" level=info msg="StartContainer for \"710517acfcd691a9b666f78aefba4b0fcbb98b850b349e01b0812c4c80b2889b\" returns successfully" May 15 00:07:38.824561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542563223.mount: Deactivated successfully. 
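Editor's note: the PullImage request above uses a reference of the form repository:tag@digest (quay.io/cilium/operator-generic:v1.12.5@sha256:...), and the later "Pulled image" line reports an empty repo tag with only the repo digest kept, since a digest-pinned pull resolves by the digest rather than the tag. A naive sketch of splitting such a reference with the standard library; real tooling uses the distribution reference parser, which this does not attempt to replicate.

package main

import (
	"fmt"
	"strings"
)

// splitRef naively splits "repo:tag@digest" into its parts. It ignores
// edge cases (registry ports, missing tag, etc.) and only illustrates the
// reference format seen in the log.
func splitRef(ref string) (repo, tag, digest string) {
	if at := strings.Index(ref, "@"); at >= 0 {
		digest = ref[at+1:]
		ref = ref[:at]
	}
	if colon := strings.LastIndex(ref, ":"); colon >= 0 && !strings.Contains(ref[colon:], "/") {
		tag = ref[colon+1:]
		ref = ref[:colon]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println(repo)   // quay.io/cilium/operator-generic
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:b296eb7f...
}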
May 15 00:07:39.682728 containerd[1478]: time="2025-05-15T00:07:39.682661184Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:39.684083 containerd[1478]: time="2025-05-15T00:07:39.684005424Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 00:07:39.685216 containerd[1478]: time="2025-05-15T00:07:39.685131629Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:39.687555 containerd[1478]: time="2025-05-15T00:07:39.686960656Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.927587342s" May 15 00:07:39.687555 containerd[1478]: time="2025-05-15T00:07:39.687009337Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 00:07:39.688612 containerd[1478]: time="2025-05-15T00:07:39.688581928Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:07:39.690507 containerd[1478]: time="2025-05-15T00:07:39.690214598Z" level=info msg="CreateContainer within sandbox \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 00:07:39.712327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222001088.mount: Deactivated successfully. May 15 00:07:39.717940 containerd[1478]: time="2025-05-15T00:07:39.717878466Z" level=info msg="CreateContainer within sandbox \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\"" May 15 00:07:39.719066 containerd[1478]: time="2025-05-15T00:07:39.718884374Z" level=info msg="StartContainer for \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\"" May 15 00:07:39.767530 systemd[1]: run-containerd-runc-k8s.io-6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9-runc.IurUKs.mount: Deactivated successfully. May 15 00:07:39.777289 systemd[1]: Started cri-containerd-6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9.scope - libcontainer container 6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9. 
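Editor's note: the operator image pull above reports "bytes read=18904197" and finishes "in 1.927587342s", which works out to roughly 9.8 MB/s from quay.io. A quick check of that arithmetic using the two figures from the log.

package main

import "fmt"

func main() {
	const bytesRead = 18904197  // "bytes read=18904197" from the log
	const seconds = 1.927587342 // "in 1.927587342s" from the log
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~9.8 MB/s
}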
May 15 00:07:39.813083 containerd[1478]: time="2025-05-15T00:07:39.812974944Z" level=info msg="StartContainer for \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\" returns successfully" May 15 00:07:40.865518 kubelet[2749]: I0515 00:07:40.865296 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2prds" podStartSLOduration=1.9350200119999998 podStartE2EDuration="3.86503088s" podCreationTimestamp="2025-05-15 00:07:37 +0000 UTC" firstStartedPulling="2025-05-15 00:07:37.758318018 +0000 UTC m=+15.466449016" lastFinishedPulling="2025-05-15 00:07:39.688328884 +0000 UTC m=+17.396459884" observedRunningTime="2025-05-15 00:07:40.863015976 +0000 UTC m=+18.571146983" watchObservedRunningTime="2025-05-15 00:07:40.86503088 +0000 UTC m=+18.573161891" May 15 00:07:40.867666 kubelet[2749]: I0515 00:07:40.866266 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9j76" podStartSLOduration=4.866248209 podStartE2EDuration="4.866248209s" podCreationTimestamp="2025-05-15 00:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:07:38.660591356 +0000 UTC m=+16.368722365" watchObservedRunningTime="2025-05-15 00:07:40.866248209 +0000 UTC m=+18.574379219" May 15 00:07:43.794158 systemd[1]: Started sshd@10-10.128.0.108:22-45.79.181.223:56608.service - OpenSSH per-connection server daemon (45.79.181.223:56608). May 15 00:07:44.748787 sshd[3168]: Connection closed by 45.79.181.223 port 56608 [preauth] May 15 00:07:44.751682 systemd[1]: sshd@10-10.128.0.108:22-45.79.181.223:56608.service: Deactivated successfully. May 15 00:07:44.818223 systemd[1]: Started sshd@11-10.128.0.108:22-45.79.181.223:56612.service - OpenSSH per-connection server daemon (45.79.181.223:56612). May 15 00:07:45.226284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650231146.mount: Deactivated successfully. May 15 00:07:45.744494 sshd[3173]: Connection closed by 45.79.181.223 port 56612 [preauth] May 15 00:07:45.746204 systemd[1]: sshd@11-10.128.0.108:22-45.79.181.223:56612.service: Deactivated successfully. May 15 00:07:45.807981 systemd[1]: Started sshd@12-10.128.0.108:22-45.79.181.223:9852.service - OpenSSH per-connection server daemon (45.79.181.223:9852). May 15 00:07:46.845069 sshd[3190]: Connection closed by 45.79.181.223 port 9852 [preauth] May 15 00:07:46.846228 systemd[1]: sshd@12-10.128.0.108:22-45.79.181.223:9852.service: Deactivated successfully. 
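Editor's note: for cilium-operator the tracker above reports podStartE2EDuration="3.86503088s" but podStartSLOduration=1.935..., and the difference is the image pull window (firstStartedPulling 00:07:37.758318018 to lastFinishedPulling 00:07:39.688328884, about 1.93s), i.e. the SLO figure excludes pull time. A sketch of that subtraction using the timestamps from the log; this is just how I read those fields, not a reproduction of kubelet's pod_startup_latency_tracker.

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker line above.
	created := mustParse("2025-05-15 00:07:37 +0000 UTC")
	running := mustParse("2025-05-15 00:07:40.86503088 +0000 UTC")
	pullStart := mustParse("2025-05-15 00:07:37.758318018 +0000 UTC")
	pullEnd := mustParse("2025-05-15 00:07:39.688328884 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 3.86503088s
	fmt.Println(slo) // ~1.935020014s, the reported podStartSLOduration up to float rounding
}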
May 15 00:07:48.071030 containerd[1478]: time="2025-05-15T00:07:48.070933275Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:48.072454 containerd[1478]: time="2025-05-15T00:07:48.072359967Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 00:07:48.074168 containerd[1478]: time="2025-05-15T00:07:48.074082719Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:07:48.077856 containerd[1478]: time="2025-05-15T00:07:48.077811562Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.388058242s" May 15 00:07:48.077856 containerd[1478]: time="2025-05-15T00:07:48.077851852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 00:07:48.081838 containerd[1478]: time="2025-05-15T00:07:48.081797287Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:07:48.106857 containerd[1478]: time="2025-05-15T00:07:48.106804696Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\"" May 15 00:07:48.108224 containerd[1478]: time="2025-05-15T00:07:48.107738110Z" level=info msg="StartContainer for \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\"" May 15 00:07:48.159318 systemd[1]: Started cri-containerd-544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322.scope - libcontainer container 544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322. May 15 00:07:48.200017 containerd[1478]: time="2025-05-15T00:07:48.199930917Z" level=info msg="StartContainer for \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\" returns successfully" May 15 00:07:48.217949 systemd[1]: cri-containerd-544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322.scope: Deactivated successfully. May 15 00:07:49.094262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322-rootfs.mount: Deactivated successfully. 
May 15 00:07:50.306011 containerd[1478]: time="2025-05-15T00:07:50.305881867Z" level=info msg="shim disconnected" id=544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322 namespace=k8s.io May 15 00:07:50.306011 containerd[1478]: time="2025-05-15T00:07:50.305981861Z" level=warning msg="cleaning up after shim disconnected" id=544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322 namespace=k8s.io May 15 00:07:50.306011 containerd[1478]: time="2025-05-15T00:07:50.306000696Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:07:50.691492 containerd[1478]: time="2025-05-15T00:07:50.691429432Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:07:50.717981 containerd[1478]: time="2025-05-15T00:07:50.717781144Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\"" May 15 00:07:50.720604 containerd[1478]: time="2025-05-15T00:07:50.718772793Z" level=info msg="StartContainer for \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\"" May 15 00:07:50.785295 systemd[1]: Started cri-containerd-d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3.scope - libcontainer container d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3. May 15 00:07:50.823156 containerd[1478]: time="2025-05-15T00:07:50.823100128Z" level=info msg="StartContainer for \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\" returns successfully" May 15 00:07:50.841451 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:07:50.842170 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:07:50.843126 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 00:07:50.853538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:07:50.853936 systemd[1]: cri-containerd-d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3.scope: Deactivated successfully. May 15 00:07:50.883340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:07:50.897144 containerd[1478]: time="2025-05-15T00:07:50.896994199Z" level=info msg="shim disconnected" id=d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3 namespace=k8s.io May 15 00:07:50.897144 containerd[1478]: time="2025-05-15T00:07:50.897144199Z" level=warning msg="cleaning up after shim disconnected" id=d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3 namespace=k8s.io May 15 00:07:50.897643 containerd[1478]: time="2025-05-15T00:07:50.897161187Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:07:51.695002 containerd[1478]: time="2025-05-15T00:07:51.694742292Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:07:51.714513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3-rootfs.mount: Deactivated successfully. 
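Editor's note: the apply-sysctl-overwrites step above runs ahead of the agent and is immediately followed by Flatcar restarting systemd-sysctl.service to re-apply kernel variables. Parameters of that kind live under /proc/sys, with dots mapped to path separators; a minimal, read-only sketch of looking one up that way. The chosen key is only an example and is not taken from this log, and this is not what the cilium container itself executes.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// readSysctl reads a kernel parameter from /proc/sys, e.g.
// "net.ipv4.ip_forward" -> /proc/sys/net/ipv4/ip_forward.
func readSysctl(key string) (string, error) {
	p := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	b, err := os.ReadFile(p)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := readSysctl("net.ipv4.ip_forward")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("net.ipv4.ip_forward =", v)
}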
May 15 00:07:51.740987 containerd[1478]: time="2025-05-15T00:07:51.740482956Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\"" May 15 00:07:51.741950 containerd[1478]: time="2025-05-15T00:07:51.741911518Z" level=info msg="StartContainer for \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\"" May 15 00:07:51.809311 systemd[1]: Started cri-containerd-1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124.scope - libcontainer container 1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124. May 15 00:07:51.857679 containerd[1478]: time="2025-05-15T00:07:51.857608225Z" level=info msg="StartContainer for \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\" returns successfully" May 15 00:07:51.864263 systemd[1]: cri-containerd-1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124.scope: Deactivated successfully. May 15 00:07:51.900270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124-rootfs.mount: Deactivated successfully. May 15 00:07:51.902773 containerd[1478]: time="2025-05-15T00:07:51.902258341Z" level=info msg="shim disconnected" id=1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124 namespace=k8s.io May 15 00:07:51.902773 containerd[1478]: time="2025-05-15T00:07:51.902420618Z" level=warning msg="cleaning up after shim disconnected" id=1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124 namespace=k8s.io May 15 00:07:51.902773 containerd[1478]: time="2025-05-15T00:07:51.902440347Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:07:52.702309 containerd[1478]: time="2025-05-15T00:07:52.701892747Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:07:52.733107 containerd[1478]: time="2025-05-15T00:07:52.732949313Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\"" May 15 00:07:52.734587 containerd[1478]: time="2025-05-15T00:07:52.734363961Z" level=info msg="StartContainer for \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\"" May 15 00:07:52.793278 systemd[1]: Started cri-containerd-2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9.scope - libcontainer container 2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9. May 15 00:07:52.836409 systemd[1]: cri-containerd-2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9.scope: Deactivated successfully. May 15 00:07:52.843121 containerd[1478]: time="2025-05-15T00:07:52.842881113Z" level=info msg="StartContainer for \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\" returns successfully" May 15 00:07:52.874186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9-rootfs.mount: Deactivated successfully. 
May 15 00:07:52.877008 containerd[1478]: time="2025-05-15T00:07:52.876878604Z" level=info msg="shim disconnected" id=2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9 namespace=k8s.io May 15 00:07:52.877008 containerd[1478]: time="2025-05-15T00:07:52.876953115Z" level=warning msg="cleaning up after shim disconnected" id=2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9 namespace=k8s.io May 15 00:07:52.877008 containerd[1478]: time="2025-05-15T00:07:52.876969819Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:07:53.707263 containerd[1478]: time="2025-05-15T00:07:53.706945418Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:07:53.747392 containerd[1478]: time="2025-05-15T00:07:53.747331303Z" level=info msg="CreateContainer within sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\"" May 15 00:07:53.749960 containerd[1478]: time="2025-05-15T00:07:53.748533369Z" level=info msg="StartContainer for \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\"" May 15 00:07:53.805378 systemd[1]: Started cri-containerd-3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212.scope - libcontainer container 3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212. May 15 00:07:53.853320 containerd[1478]: time="2025-05-15T00:07:53.853264453Z" level=info msg="StartContainer for \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\" returns successfully" May 15 00:07:54.019889 kubelet[2749]: I0515 00:07:54.019739 2749 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 00:07:54.060400 kubelet[2749]: I0515 00:07:54.060154 2749 topology_manager.go:215] "Topology Admit Handler" podUID="f5c6f8c5-eb07-421b-8646-1c356992408a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b9gd2" May 15 00:07:54.063106 kubelet[2749]: I0515 00:07:54.061970 2749 topology_manager.go:215] "Topology Admit Handler" podUID="b4138f2c-5e2e-404a-afce-a4ad4d8affc0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ldbdw" May 15 00:07:54.081662 systemd[1]: Created slice kubepods-burstable-podb4138f2c_5e2e_404a_afce_a4ad4d8affc0.slice - libcontainer container kubepods-burstable-podb4138f2c_5e2e_404a_afce_a4ad4d8affc0.slice. 
May 15 00:07:54.090583 kubelet[2749]: I0515 00:07:54.089205 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x95pg\" (UniqueName: \"kubernetes.io/projected/b4138f2c-5e2e-404a-afce-a4ad4d8affc0-kube-api-access-x95pg\") pod \"coredns-7db6d8ff4d-ldbdw\" (UID: \"b4138f2c-5e2e-404a-afce-a4ad4d8affc0\") " pod="kube-system/coredns-7db6d8ff4d-ldbdw" May 15 00:07:54.090583 kubelet[2749]: I0515 00:07:54.089255 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5c6f8c5-eb07-421b-8646-1c356992408a-config-volume\") pod \"coredns-7db6d8ff4d-b9gd2\" (UID: \"f5c6f8c5-eb07-421b-8646-1c356992408a\") " pod="kube-system/coredns-7db6d8ff4d-b9gd2" May 15 00:07:54.090583 kubelet[2749]: I0515 00:07:54.089305 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4138f2c-5e2e-404a-afce-a4ad4d8affc0-config-volume\") pod \"coredns-7db6d8ff4d-ldbdw\" (UID: \"b4138f2c-5e2e-404a-afce-a4ad4d8affc0\") " pod="kube-system/coredns-7db6d8ff4d-ldbdw" May 15 00:07:54.090583 kubelet[2749]: I0515 00:07:54.089376 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnk24\" (UniqueName: \"kubernetes.io/projected/f5c6f8c5-eb07-421b-8646-1c356992408a-kube-api-access-hnk24\") pod \"coredns-7db6d8ff4d-b9gd2\" (UID: \"f5c6f8c5-eb07-421b-8646-1c356992408a\") " pod="kube-system/coredns-7db6d8ff4d-b9gd2" May 15 00:07:54.098356 systemd[1]: Created slice kubepods-burstable-podf5c6f8c5_eb07_421b_8646_1c356992408a.slice - libcontainer container kubepods-burstable-podf5c6f8c5_eb07_421b_8646_1c356992408a.slice. 
May 15 00:07:54.394774 containerd[1478]: time="2025-05-15T00:07:54.394711449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ldbdw,Uid:b4138f2c-5e2e-404a-afce-a4ad4d8affc0,Namespace:kube-system,Attempt:0,}" May 15 00:07:54.406870 containerd[1478]: time="2025-05-15T00:07:54.406797883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b9gd2,Uid:f5c6f8c5-eb07-421b-8646-1c356992408a,Namespace:kube-system,Attempt:0,}" May 15 00:07:54.748448 kubelet[2749]: I0515 00:07:54.747659 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r8b6b" podStartSLOduration=8.962580433 podStartE2EDuration="18.747555071s" podCreationTimestamp="2025-05-15 00:07:36 +0000 UTC" firstStartedPulling="2025-05-15 00:07:38.2938146 +0000 UTC m=+16.001945596" lastFinishedPulling="2025-05-15 00:07:48.078789233 +0000 UTC m=+25.786920234" observedRunningTime="2025-05-15 00:07:54.745686939 +0000 UTC m=+32.453817948" watchObservedRunningTime="2025-05-15 00:07:54.747555071 +0000 UTC m=+32.455686080" May 15 00:07:56.197792 systemd-networkd[1387]: cilium_host: Link UP May 15 00:07:56.199676 systemd-networkd[1387]: cilium_net: Link UP May 15 00:07:56.200020 systemd-networkd[1387]: cilium_net: Gained carrier May 15 00:07:56.200352 systemd-networkd[1387]: cilium_host: Gained carrier May 15 00:07:56.354939 systemd-networkd[1387]: cilium_vxlan: Link UP May 15 00:07:56.354951 systemd-networkd[1387]: cilium_vxlan: Gained carrier May 15 00:07:56.475342 systemd-networkd[1387]: cilium_host: Gained IPv6LL May 15 00:07:56.588913 systemd-networkd[1387]: cilium_net: Gained IPv6LL May 15 00:07:56.641249 kernel: NET: Registered PF_ALG protocol family May 15 00:07:57.508997 systemd-networkd[1387]: lxc_health: Link UP May 15 00:07:57.517326 systemd-networkd[1387]: lxc_health: Gained carrier May 15 00:07:57.787316 systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL May 15 00:07:57.967694 systemd-networkd[1387]: lxcd6b69c26bec3: Link UP May 15 00:07:57.973213 kernel: eth0: renamed from tmpca3fd May 15 00:07:57.984256 systemd-networkd[1387]: lxcd6b69c26bec3: Gained carrier May 15 00:07:58.028083 kernel: eth0: renamed from tmp0653e May 15 00:07:58.029413 systemd-networkd[1387]: lxc84322787c769: Link UP May 15 00:07:58.043620 systemd-networkd[1387]: lxc84322787c769: Gained carrier May 15 00:07:59.195388 systemd-networkd[1387]: lxc_health: Gained IPv6LL May 15 00:07:59.325276 systemd-networkd[1387]: lxc84322787c769: Gained IPv6LL May 15 00:07:59.707413 systemd-networkd[1387]: lxcd6b69c26bec3: Gained IPv6LL May 15 00:08:02.054928 ntpd[1446]: Listen normally on 7 cilium_host 192.168.0.125:123 May 15 00:08:02.055756 ntpd[1446]: 15 May 00:08:02 ntpd[1446]: Listen normally on 7 cilium_host 192.168.0.125:123 May 15 00:08:02.055967 ntpd[1446]: Listen normally on 8 cilium_net [fe80::78ae:8ff:feef:4784%4]:123 May 15 00:08:02.056678 ntpd[1446]: 15 May 00:08:02 ntpd[1446]: Listen normally on 8 cilium_net [fe80::78ae:8ff:feef:4784%4]:123 May 15 00:08:02.056678 ntpd[1446]: 15 May 00:08:02 ntpd[1446]: Listen normally on 9 cilium_host [fe80::dc31:c5ff:feff:d161%5]:123 May 15 00:08:02.056678 ntpd[1446]: 15 May 00:08:02 ntpd[1446]: Listen normally on 10 cilium_vxlan [fe80::c441:b3ff:fe4b:4bdd%6]:123 May 15 00:08:02.056678 ntpd[1446]: 15 May 00:08:02 ntpd[1446]: Listen normally on 11 lxc_health [fe80::ec2a:87ff:fe60:27b0%8]:123 May 15 00:08:02.056678 ntpd[1446]: 15 May 00:08:02 ntpd[1446]: Listen normally on 12 lxcd6b69c26bec3 [fe80::b833:c7ff:fe88:d9c6%10]:123 May 15 
00:08:02.056678 ntpd[1446]: 15 May 00:08:02 ntpd[1446]: Listen normally on 13 lxc84322787c769 [fe80::50bb:85ff:fe86:7f4a%12]:123 May 15 00:08:02.056108 ntpd[1446]: Listen normally on 9 cilium_host [fe80::dc31:c5ff:feff:d161%5]:123 May 15 00:08:02.056177 ntpd[1446]: Listen normally on 10 cilium_vxlan [fe80::c441:b3ff:fe4b:4bdd%6]:123 May 15 00:08:02.056234 ntpd[1446]: Listen normally on 11 lxc_health [fe80::ec2a:87ff:fe60:27b0%8]:123 May 15 00:08:02.056290 ntpd[1446]: Listen normally on 12 lxcd6b69c26bec3 [fe80::b833:c7ff:fe88:d9c6%10]:123 May 15 00:08:02.056446 ntpd[1446]: Listen normally on 13 lxc84322787c769 [fe80::50bb:85ff:fe86:7f4a%12]:123 May 15 00:08:03.229365 containerd[1478]: time="2025-05-15T00:08:03.227742272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:03.229365 containerd[1478]: time="2025-05-15T00:08:03.228992379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:03.229365 containerd[1478]: time="2025-05-15T00:08:03.229036286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:03.229365 containerd[1478]: time="2025-05-15T00:08:03.229243734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:03.234617 containerd[1478]: time="2025-05-15T00:08:03.233199736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:08:03.234617 containerd[1478]: time="2025-05-15T00:08:03.233386705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:08:03.234617 containerd[1478]: time="2025-05-15T00:08:03.233901567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:03.234617 containerd[1478]: time="2025-05-15T00:08:03.234299797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:08:03.318434 systemd[1]: Started cri-containerd-0653edc107a835eb6dcb33ac4a16ad802a07db4cf6fec32b533803875e9873cb.scope - libcontainer container 0653edc107a835eb6dcb33ac4a16ad802a07db4cf6fec32b533803875e9873cb. May 15 00:08:03.323716 systemd[1]: Started cri-containerd-ca3fdc12c6c2edd13d76fe09d55635505f6449c3140a39e17eb64579e6af565f.scope - libcontainer container ca3fdc12c6c2edd13d76fe09d55635505f6449c3140a39e17eb64579e6af565f. 
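Editor's note: the ntpd lines above list new sockets on the cilium and lxc interfaces using IPv6 link-local addresses with a %zone suffix (e.g. fe80::ec2a:87ff:fe60:27b0%8, where the zone is the interface index on this host). Go's net/netip parses that form directly, zone included; a small sketch using two addresses copied from the log.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	for _, s := range []string{
		"fe80::ec2a:87ff:fe60:27b0%8",  // lxc_health
		"fe80::b833:c7ff:fe88:d9c6%10", // lxcd6b69c26bec3
	} {
		addr, err := netip.ParseAddr(s)
		if err != nil {
			fmt.Println("parse failed:", err)
			continue
		}
		fmt.Println(addr, "link-local:", addr.IsLinkLocalUnicast(), "zone:", addr.Zone())
	}
}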
May 15 00:08:03.452907 containerd[1478]: time="2025-05-15T00:08:03.452853386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ldbdw,Uid:b4138f2c-5e2e-404a-afce-a4ad4d8affc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca3fdc12c6c2edd13d76fe09d55635505f6449c3140a39e17eb64579e6af565f\"" May 15 00:08:03.462133 containerd[1478]: time="2025-05-15T00:08:03.461851886Z" level=info msg="CreateContainer within sandbox \"ca3fdc12c6c2edd13d76fe09d55635505f6449c3140a39e17eb64579e6af565f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:08:03.489444 containerd[1478]: time="2025-05-15T00:08:03.487125611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b9gd2,Uid:f5c6f8c5-eb07-421b-8646-1c356992408a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0653edc107a835eb6dcb33ac4a16ad802a07db4cf6fec32b533803875e9873cb\"" May 15 00:08:03.498904 containerd[1478]: time="2025-05-15T00:08:03.498707394Z" level=info msg="CreateContainer within sandbox \"0653edc107a835eb6dcb33ac4a16ad802a07db4cf6fec32b533803875e9873cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:08:03.503225 containerd[1478]: time="2025-05-15T00:08:03.503158373Z" level=info msg="CreateContainer within sandbox \"ca3fdc12c6c2edd13d76fe09d55635505f6449c3140a39e17eb64579e6af565f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e696da581321306cd734d307b5009dd160e4ed15804fc37b0630d982d8375dc\"" May 15 00:08:03.505476 containerd[1478]: time="2025-05-15T00:08:03.504275117Z" level=info msg="StartContainer for \"5e696da581321306cd734d307b5009dd160e4ed15804fc37b0630d982d8375dc\"" May 15 00:08:03.533332 containerd[1478]: time="2025-05-15T00:08:03.533254832Z" level=info msg="CreateContainer within sandbox \"0653edc107a835eb6dcb33ac4a16ad802a07db4cf6fec32b533803875e9873cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87cbf8087b9b31774c359356c29d32bbf68d0b76036b9529c1980369d0742497\"" May 15 00:08:03.535099 containerd[1478]: time="2025-05-15T00:08:03.534992959Z" level=info msg="StartContainer for \"87cbf8087b9b31774c359356c29d32bbf68d0b76036b9529c1980369d0742497\"" May 15 00:08:03.567348 systemd[1]: Started cri-containerd-5e696da581321306cd734d307b5009dd160e4ed15804fc37b0630d982d8375dc.scope - libcontainer container 5e696da581321306cd734d307b5009dd160e4ed15804fc37b0630d982d8375dc. May 15 00:08:03.600338 systemd[1]: Started cri-containerd-87cbf8087b9b31774c359356c29d32bbf68d0b76036b9529c1980369d0742497.scope - libcontainer container 87cbf8087b9b31774c359356c29d32bbf68d0b76036b9529c1980369d0742497. 
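The containerd entries around this point trace the usual CRI ordering for bringing a pod up: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer then runs it. A minimal sketch of that ordering, using a hypothetical stripped-down interface rather than the real CRI client, purely for illustration:

package main

import "fmt"

// runtimeService is a hypothetical, stripped-down stand-in for the CRI
// runtime service; only the ordering of the calls mirrors the log above.
type runtimeService interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod reproduces the sequence visible in the coredns entries:
// sandbox first, then container creation inside it, then start.
func startPod(rt runtimeService, pod, container string) error {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	cid, err := rt.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer in %s: %w", sb, err)
	}
	return rt.StartContainer(cid)
}

// fakeRuntime is a trivial in-memory implementation so the sketch runs.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(name string) (string, error)       { return "sandbox-" + name, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return "ctr-" + name, nil }
func (fakeRuntime) StartContainer(id string) error                  { fmt.Println("started", id); return nil }

func main() {
	_ = startPod(fakeRuntime{}, "coredns-7db6d8ff4d-ldbdw", "coredns")
}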
May 15 00:08:03.631376 containerd[1478]: time="2025-05-15T00:08:03.630167243Z" level=info msg="StartContainer for \"5e696da581321306cd734d307b5009dd160e4ed15804fc37b0630d982d8375dc\" returns successfully" May 15 00:08:03.664390 containerd[1478]: time="2025-05-15T00:08:03.664014216Z" level=info msg="StartContainer for \"87cbf8087b9b31774c359356c29d32bbf68d0b76036b9529c1980369d0742497\" returns successfully" May 15 00:08:03.766305 kubelet[2749]: I0515 00:08:03.765497 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ldbdw" podStartSLOduration=26.765473679 podStartE2EDuration="26.765473679s" podCreationTimestamp="2025-05-15 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:03.764155528 +0000 UTC m=+41.472286539" watchObservedRunningTime="2025-05-15 00:08:03.765473679 +0000 UTC m=+41.473604688" May 15 00:08:03.784589 kubelet[2749]: I0515 00:08:03.784289 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b9gd2" podStartSLOduration=26.784265669 podStartE2EDuration="26.784265669s" podCreationTimestamp="2025-05-15 00:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:03.782639889 +0000 UTC m=+41.490770898" watchObservedRunningTime="2025-05-15 00:08:03.784265669 +0000 UTC m=+41.492396680" May 15 00:08:04.248337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785729399.mount: Deactivated successfully. May 15 00:08:04.444705 kubelet[2749]: I0515 00:08:04.444399 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:08:20.581796 systemd[1]: Started sshd@13-10.128.0.108:22-147.75.109.163:56472.service - OpenSSH per-connection server daemon (147.75.109.163:56472). May 15 00:08:20.872696 sshd[4124]: Accepted publickey for core from 147.75.109.163 port 56472 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:20.874752 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:20.881496 systemd-logind[1458]: New session 10 of user core. May 15 00:08:20.887270 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:08:21.196609 sshd[4126]: Connection closed by 147.75.109.163 port 56472 May 15 00:08:21.197460 sshd-session[4124]: pam_unix(sshd:session): session closed for user core May 15 00:08:21.201728 systemd[1]: sshd@13-10.128.0.108:22-147.75.109.163:56472.service: Deactivated successfully. May 15 00:08:21.205816 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:08:21.208341 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. May 15 00:08:21.210588 systemd-logind[1458]: Removed session 10. May 15 00:08:26.257532 systemd[1]: Started sshd@14-10.128.0.108:22-147.75.109.163:56484.service - OpenSSH per-connection server daemon (147.75.109.163:56484). May 15 00:08:26.547122 sshd[4141]: Accepted publickey for core from 147.75.109.163 port 56484 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:26.549000 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:26.555939 systemd-logind[1458]: New session 11 of user core. May 15 00:08:26.558377 systemd[1]: Started session-11.scope - Session 11 of User core. 
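The pod_startup_latency_tracker entries report two figures per pod: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) and podStartSLOduration, which appears to be the same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted; for the coredns pods above the two are equal because no pull was recorded. A small Go check of that arithmetic against the cilium-r8b6b entry earlier in the log:

package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+ values, in seconds) copied from the
	// cilium-r8b6b pod_startup_latency_tracker entry earlier in this log.
	const (
		firstStartedPulling = 16.001945596
		lastFinishedPulling = 25.786920234
		podStartE2E         = 18.747555071 // observedRunningTime - podCreationTimestamp
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pullWindow
	fmt.Printf("image-pull window:   %.9fs\n", pullWindow) // ~9.784974638s
	fmt.Printf("podStartSLOduration: %.9fs\n", slo)        // ~8.962580433s, matching the log
}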
May 15 00:08:26.844092 sshd[4143]: Connection closed by 147.75.109.163 port 56484 May 15 00:08:26.845107 sshd-session[4141]: pam_unix(sshd:session): session closed for user core May 15 00:08:26.849941 systemd[1]: sshd@14-10.128.0.108:22-147.75.109.163:56484.service: Deactivated successfully. May 15 00:08:26.852997 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:08:26.855130 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. May 15 00:08:26.856743 systemd-logind[1458]: Removed session 11. May 15 00:08:31.904517 systemd[1]: Started sshd@15-10.128.0.108:22-147.75.109.163:59704.service - OpenSSH per-connection server daemon (147.75.109.163:59704). May 15 00:08:32.206017 sshd[4156]: Accepted publickey for core from 147.75.109.163 port 59704 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:32.208280 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:32.215382 systemd-logind[1458]: New session 12 of user core. May 15 00:08:32.221406 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:08:32.502631 sshd[4158]: Connection closed by 147.75.109.163 port 59704 May 15 00:08:32.504350 sshd-session[4156]: pam_unix(sshd:session): session closed for user core May 15 00:08:32.511233 systemd[1]: sshd@15-10.128.0.108:22-147.75.109.163:59704.service: Deactivated successfully. May 15 00:08:32.514392 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:08:32.516653 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. May 15 00:08:32.518882 systemd-logind[1458]: Removed session 12. May 15 00:08:37.559468 systemd[1]: Started sshd@16-10.128.0.108:22-147.75.109.163:59712.service - OpenSSH per-connection server daemon (147.75.109.163:59712). May 15 00:08:37.862191 sshd[4172]: Accepted publickey for core from 147.75.109.163 port 59712 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:37.863978 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:37.870308 systemd-logind[1458]: New session 13 of user core. May 15 00:08:37.875258 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:08:38.153460 sshd[4174]: Connection closed by 147.75.109.163 port 59712 May 15 00:08:38.154002 sshd-session[4172]: pam_unix(sshd:session): session closed for user core May 15 00:08:38.160344 systemd[1]: sshd@16-10.128.0.108:22-147.75.109.163:59712.service: Deactivated successfully. May 15 00:08:38.163323 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:08:38.164834 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. May 15 00:08:38.166739 systemd-logind[1458]: Removed session 13. May 15 00:08:38.215502 systemd[1]: Started sshd@17-10.128.0.108:22-147.75.109.163:58544.service - OpenSSH per-connection server daemon (147.75.109.163:58544). May 15 00:08:38.506467 sshd[4187]: Accepted publickey for core from 147.75.109.163 port 58544 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:38.509245 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:38.516499 systemd-logind[1458]: New session 14 of user core. May 15 00:08:38.525291 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 15 00:08:38.840729 sshd[4191]: Connection closed by 147.75.109.163 port 58544 May 15 00:08:38.841613 sshd-session[4187]: pam_unix(sshd:session): session closed for user core May 15 00:08:38.847158 systemd[1]: sshd@17-10.128.0.108:22-147.75.109.163:58544.service: Deactivated successfully. May 15 00:08:38.849790 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:08:38.851268 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. May 15 00:08:38.852878 systemd-logind[1458]: Removed session 14. May 15 00:08:38.901480 systemd[1]: Started sshd@18-10.128.0.108:22-147.75.109.163:58552.service - OpenSSH per-connection server daemon (147.75.109.163:58552). May 15 00:08:39.199543 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 58552 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:39.201512 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:39.209019 systemd-logind[1458]: New session 15 of user core. May 15 00:08:39.212345 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:08:39.491999 sshd[4203]: Connection closed by 147.75.109.163 port 58552 May 15 00:08:39.493264 sshd-session[4201]: pam_unix(sshd:session): session closed for user core May 15 00:08:39.497720 systemd[1]: sshd@18-10.128.0.108:22-147.75.109.163:58552.service: Deactivated successfully. May 15 00:08:39.500866 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:08:39.503649 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. May 15 00:08:39.505400 systemd-logind[1458]: Removed session 15. May 15 00:08:44.552548 systemd[1]: Started sshd@19-10.128.0.108:22-147.75.109.163:58568.service - OpenSSH per-connection server daemon (147.75.109.163:58568). May 15 00:08:44.846322 sshd[4216]: Accepted publickey for core from 147.75.109.163 port 58568 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:44.848251 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:44.855294 systemd-logind[1458]: New session 16 of user core. May 15 00:08:44.859305 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:08:45.137169 sshd[4218]: Connection closed by 147.75.109.163 port 58568 May 15 00:08:45.138467 sshd-session[4216]: pam_unix(sshd:session): session closed for user core May 15 00:08:45.144331 systemd[1]: sshd@19-10.128.0.108:22-147.75.109.163:58568.service: Deactivated successfully. May 15 00:08:45.146847 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:08:45.148099 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. May 15 00:08:45.149584 systemd-logind[1458]: Removed session 16. May 15 00:08:50.199576 systemd[1]: Started sshd@20-10.128.0.108:22-147.75.109.163:37176.service - OpenSSH per-connection server daemon (147.75.109.163:37176). May 15 00:08:50.502034 sshd[4231]: Accepted publickey for core from 147.75.109.163 port 37176 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:50.504340 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:50.512173 systemd-logind[1458]: New session 17 of user core. May 15 00:08:50.518334 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 15 00:08:50.793817 sshd[4233]: Connection closed by 147.75.109.163 port 37176 May 15 00:08:50.795133 sshd-session[4231]: pam_unix(sshd:session): session closed for user core May 15 00:08:50.800522 systemd[1]: sshd@20-10.128.0.108:22-147.75.109.163:37176.service: Deactivated successfully. May 15 00:08:50.803367 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:08:50.804677 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. May 15 00:08:50.806307 systemd-logind[1458]: Removed session 17. May 15 00:08:55.855546 systemd[1]: Started sshd@21-10.128.0.108:22-147.75.109.163:37192.service - OpenSSH per-connection server daemon (147.75.109.163:37192). May 15 00:08:56.155033 sshd[4247]: Accepted publickey for core from 147.75.109.163 port 37192 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:56.156897 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:56.164893 systemd-logind[1458]: New session 18 of user core. May 15 00:08:56.175392 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:08:56.451542 sshd[4249]: Connection closed by 147.75.109.163 port 37192 May 15 00:08:56.452695 sshd-session[4247]: pam_unix(sshd:session): session closed for user core May 15 00:08:56.458235 systemd[1]: sshd@21-10.128.0.108:22-147.75.109.163:37192.service: Deactivated successfully. May 15 00:08:56.461197 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:08:56.462444 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. May 15 00:08:56.464110 systemd-logind[1458]: Removed session 18. May 15 00:08:56.513510 systemd[1]: Started sshd@22-10.128.0.108:22-147.75.109.163:37206.service - OpenSSH per-connection server daemon (147.75.109.163:37206). May 15 00:08:56.805702 sshd[4261]: Accepted publickey for core from 147.75.109.163 port 37206 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:56.807279 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:56.813728 systemd-logind[1458]: New session 19 of user core. May 15 00:08:56.820313 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:08:57.212067 sshd[4263]: Connection closed by 147.75.109.163 port 37206 May 15 00:08:57.213124 sshd-session[4261]: pam_unix(sshd:session): session closed for user core May 15 00:08:57.218950 systemd[1]: sshd@22-10.128.0.108:22-147.75.109.163:37206.service: Deactivated successfully. May 15 00:08:57.221998 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:08:57.223320 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. May 15 00:08:57.224874 systemd-logind[1458]: Removed session 19. May 15 00:08:57.270494 systemd[1]: Started sshd@23-10.128.0.108:22-147.75.109.163:37222.service - OpenSSH per-connection server daemon (147.75.109.163:37222). May 15 00:08:57.558182 sshd[4273]: Accepted publickey for core from 147.75.109.163 port 37222 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:57.560343 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:57.567125 systemd-logind[1458]: New session 20 of user core. May 15 00:08:57.576275 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 15 00:08:59.410396 sshd[4275]: Connection closed by 147.75.109.163 port 37222 May 15 00:08:59.411439 sshd-session[4273]: pam_unix(sshd:session): session closed for user core May 15 00:08:59.416322 systemd[1]: sshd@23-10.128.0.108:22-147.75.109.163:37222.service: Deactivated successfully. May 15 00:08:59.419553 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:08:59.421938 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. May 15 00:08:59.423551 systemd-logind[1458]: Removed session 20. May 15 00:08:59.469454 systemd[1]: Started sshd@24-10.128.0.108:22-147.75.109.163:47402.service - OpenSSH per-connection server daemon (147.75.109.163:47402). May 15 00:08:59.775101 sshd[4292]: Accepted publickey for core from 147.75.109.163 port 47402 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:08:59.778776 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:08:59.785204 systemd-logind[1458]: New session 21 of user core. May 15 00:08:59.793319 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 00:09:00.209798 sshd[4294]: Connection closed by 147.75.109.163 port 47402 May 15 00:09:00.210686 sshd-session[4292]: pam_unix(sshd:session): session closed for user core May 15 00:09:00.216506 systemd[1]: sshd@24-10.128.0.108:22-147.75.109.163:47402.service: Deactivated successfully. May 15 00:09:00.219528 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:09:00.221165 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. May 15 00:09:00.222817 systemd-logind[1458]: Removed session 21. May 15 00:09:00.268753 systemd[1]: Started sshd@25-10.128.0.108:22-147.75.109.163:47416.service - OpenSSH per-connection server daemon (147.75.109.163:47416). May 15 00:09:00.563935 sshd[4304]: Accepted publickey for core from 147.75.109.163 port 47416 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:09:00.565513 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:00.571997 systemd-logind[1458]: New session 22 of user core. May 15 00:09:00.578271 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 00:09:00.851703 sshd[4306]: Connection closed by 147.75.109.163 port 47416 May 15 00:09:00.852632 sshd-session[4304]: pam_unix(sshd:session): session closed for user core May 15 00:09:00.858613 systemd[1]: sshd@25-10.128.0.108:22-147.75.109.163:47416.service: Deactivated successfully. May 15 00:09:00.861701 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:09:00.862927 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. May 15 00:09:00.864563 systemd-logind[1458]: Removed session 22. May 15 00:09:05.910511 systemd[1]: Started sshd@26-10.128.0.108:22-147.75.109.163:47424.service - OpenSSH per-connection server daemon (147.75.109.163:47424). May 15 00:09:06.202867 sshd[4321]: Accepted publickey for core from 147.75.109.163 port 47424 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:09:06.204825 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:06.212491 systemd-logind[1458]: New session 23 of user core. May 15 00:09:06.218332 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 00:09:06.492398 sshd[4323]: Connection closed by 147.75.109.163 port 47424 May 15 00:09:06.494216 sshd-session[4321]: pam_unix(sshd:session): session closed for user core May 15 00:09:06.499015 systemd[1]: sshd@26-10.128.0.108:22-147.75.109.163:47424.service: Deactivated successfully. May 15 00:09:06.502408 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:09:06.505361 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. May 15 00:09:06.507869 systemd-logind[1458]: Removed session 23. May 15 00:09:11.549480 systemd[1]: Started sshd@27-10.128.0.108:22-147.75.109.163:53262.service - OpenSSH per-connection server daemon (147.75.109.163:53262). May 15 00:09:11.845244 sshd[4338]: Accepted publickey for core from 147.75.109.163 port 53262 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:09:11.847094 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:11.853786 systemd-logind[1458]: New session 24 of user core. May 15 00:09:11.863312 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 00:09:12.130179 sshd[4340]: Connection closed by 147.75.109.163 port 53262 May 15 00:09:12.131546 sshd-session[4338]: pam_unix(sshd:session): session closed for user core May 15 00:09:12.136739 systemd[1]: sshd@27-10.128.0.108:22-147.75.109.163:53262.service: Deactivated successfully. May 15 00:09:12.139882 systemd[1]: session-24.scope: Deactivated successfully. May 15 00:09:12.142305 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. May 15 00:09:12.144132 systemd-logind[1458]: Removed session 24. May 15 00:09:17.184133 systemd[1]: Started sshd@28-10.128.0.108:22-147.75.109.163:53272.service - OpenSSH per-connection server daemon (147.75.109.163:53272). May 15 00:09:17.490337 sshd[4352]: Accepted publickey for core from 147.75.109.163 port 53272 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:09:17.492175 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:17.499865 systemd-logind[1458]: New session 25 of user core. May 15 00:09:17.504271 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 00:09:17.779797 sshd[4354]: Connection closed by 147.75.109.163 port 53272 May 15 00:09:17.780986 sshd-session[4352]: pam_unix(sshd:session): session closed for user core May 15 00:09:17.786662 systemd[1]: sshd@28-10.128.0.108:22-147.75.109.163:53272.service: Deactivated successfully. May 15 00:09:17.789409 systemd[1]: session-25.scope: Deactivated successfully. May 15 00:09:17.790838 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit. May 15 00:09:17.792410 systemd-logind[1458]: Removed session 25. May 15 00:09:17.837486 systemd[1]: Started sshd@29-10.128.0.108:22-147.75.109.163:53282.service - OpenSSH per-connection server daemon (147.75.109.163:53282). May 15 00:09:18.127323 sshd[4366]: Accepted publickey for core from 147.75.109.163 port 53282 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs May 15 00:09:18.129235 sshd-session[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:09:18.135397 systemd-logind[1458]: New session 26 of user core. May 15 00:09:18.144271 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 15 00:09:20.440668 containerd[1478]: time="2025-05-15T00:09:20.440391300Z" level=info msg="StopContainer for \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\" with timeout 30 (s)" May 15 00:09:20.442366 containerd[1478]: time="2025-05-15T00:09:20.442112220Z" level=info msg="Stop container \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\" with signal terminated" May 15 00:09:20.475515 systemd[1]: run-containerd-runc-k8s.io-3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212-runc.8TQWci.mount: Deactivated successfully. May 15 00:09:20.478091 systemd[1]: cri-containerd-6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9.scope: Deactivated successfully. May 15 00:09:20.498306 containerd[1478]: time="2025-05-15T00:09:20.497366312Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:09:20.517278 containerd[1478]: time="2025-05-15T00:09:20.517228858Z" level=info msg="StopContainer for \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\" with timeout 2 (s)" May 15 00:09:20.517885 containerd[1478]: time="2025-05-15T00:09:20.517852103Z" level=info msg="Stop container \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\" with signal terminated" May 15 00:09:20.525285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9-rootfs.mount: Deactivated successfully. May 15 00:09:20.536526 systemd-networkd[1387]: lxc_health: Link DOWN May 15 00:09:20.536539 systemd-networkd[1387]: lxc_health: Lost carrier May 15 00:09:20.553654 systemd[1]: cri-containerd-3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212.scope: Deactivated successfully. May 15 00:09:20.554126 systemd[1]: cri-containerd-3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212.scope: Consumed 9.744s CPU time, 127.5M memory peak, 136K read from disk, 13.3M written to disk. May 15 00:09:20.567354 containerd[1478]: time="2025-05-15T00:09:20.567259011Z" level=info msg="shim disconnected" id=6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9 namespace=k8s.io May 15 00:09:20.567354 containerd[1478]: time="2025-05-15T00:09:20.567335772Z" level=warning msg="cleaning up after shim disconnected" id=6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9 namespace=k8s.io May 15 00:09:20.567354 containerd[1478]: time="2025-05-15T00:09:20.567352240Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:20.604634 containerd[1478]: time="2025-05-15T00:09:20.604585267Z" level=info msg="StopContainer for \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\" returns successfully" May 15 00:09:20.605490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212-rootfs.mount: Deactivated successfully. 
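The StopContainer entries above carry a grace period ("with timeout 30 (s)") and note the container is stopped "with signal terminated"; the usual semantics are SIGTERM first, escalating to SIGKILL only if the process is still alive when the timeout expires. A minimal sketch of that pattern for a generic child process (not containerd's implementation):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout mirrors the semantics suggested by the StopContainer
// entries above: deliver SIGTERM, wait up to `grace`, then fall back to
// SIGKILL if the process has not exited.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period (e.g. "signal: terminated")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate to SIGKILL once the grace period lapses
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // assumes a Unix-like environment
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	fmt.Println(stopWithTimeout(cmd, 2*time.Second))
}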
May 15 00:09:20.606953 containerd[1478]: time="2025-05-15T00:09:20.606723981Z" level=info msg="StopPodSandbox for \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\"" May 15 00:09:20.606953 containerd[1478]: time="2025-05-15T00:09:20.606776290Z" level=info msg="Container to stop \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:20.614677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b-shm.mount: Deactivated successfully. May 15 00:09:20.616855 containerd[1478]: time="2025-05-15T00:09:20.615829609Z" level=info msg="shim disconnected" id=3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212 namespace=k8s.io May 15 00:09:20.616855 containerd[1478]: time="2025-05-15T00:09:20.615902022Z" level=warning msg="cleaning up after shim disconnected" id=3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212 namespace=k8s.io May 15 00:09:20.617352 containerd[1478]: time="2025-05-15T00:09:20.617188599Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:20.626086 systemd[1]: cri-containerd-aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b.scope: Deactivated successfully. May 15 00:09:20.647269 containerd[1478]: time="2025-05-15T00:09:20.647163474Z" level=info msg="StopContainer for \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\" returns successfully" May 15 00:09:20.648141 containerd[1478]: time="2025-05-15T00:09:20.648104491Z" level=info msg="StopPodSandbox for \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\"" May 15 00:09:20.648866 containerd[1478]: time="2025-05-15T00:09:20.648554240Z" level=info msg="Container to stop \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:20.648866 containerd[1478]: time="2025-05-15T00:09:20.648672851Z" level=info msg="Container to stop \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:20.648866 containerd[1478]: time="2025-05-15T00:09:20.648696558Z" level=info msg="Container to stop \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:20.648866 containerd[1478]: time="2025-05-15T00:09:20.648713053Z" level=info msg="Container to stop \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:20.648866 containerd[1478]: time="2025-05-15T00:09:20.648732340Z" level=info msg="Container to stop \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:09:20.659285 systemd[1]: cri-containerd-d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00.scope: Deactivated successfully. 
May 15 00:09:20.677491 containerd[1478]: time="2025-05-15T00:09:20.677281215Z" level=info msg="shim disconnected" id=aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b namespace=k8s.io May 15 00:09:20.677491 containerd[1478]: time="2025-05-15T00:09:20.677357400Z" level=warning msg="cleaning up after shim disconnected" id=aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b namespace=k8s.io May 15 00:09:20.677491 containerd[1478]: time="2025-05-15T00:09:20.677372970Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:20.702116 containerd[1478]: time="2025-05-15T00:09:20.701454208Z" level=info msg="shim disconnected" id=d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00 namespace=k8s.io May 15 00:09:20.702116 containerd[1478]: time="2025-05-15T00:09:20.701547886Z" level=warning msg="cleaning up after shim disconnected" id=d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00 namespace=k8s.io May 15 00:09:20.702116 containerd[1478]: time="2025-05-15T00:09:20.701561791Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:20.705905 containerd[1478]: time="2025-05-15T00:09:20.705453271Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:09:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 00:09:20.708021 containerd[1478]: time="2025-05-15T00:09:20.707981381Z" level=info msg="TearDown network for sandbox \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" successfully" May 15 00:09:20.708315 containerd[1478]: time="2025-05-15T00:09:20.708171046Z" level=info msg="StopPodSandbox for \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" returns successfully" May 15 00:09:20.729961 containerd[1478]: time="2025-05-15T00:09:20.729902576Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:09:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 00:09:20.731838 containerd[1478]: time="2025-05-15T00:09:20.731788453Z" level=info msg="TearDown network for sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" successfully" May 15 00:09:20.731977 containerd[1478]: time="2025-05-15T00:09:20.731844561Z" level=info msg="StopPodSandbox for \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" returns successfully" May 15 00:09:20.811281 kubelet[2749]: I0515 00:09:20.811215 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-kernel\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.811281 kubelet[2749]: I0515 00:09:20.811289 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqvd4\" (UniqueName: \"kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.812804 kubelet[2749]: I0515 00:09:20.811318 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e45a971-439f-4641-ad69-ac96cf87af9a-clustermesh-secrets\") pod 
\"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.812804 kubelet[2749]: I0515 00:09:20.811342 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-bpf-maps\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.812804 kubelet[2749]: I0515 00:09:20.811373 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-cilium-config-path\") pod \"d070d3e1-9867-4e0b-aeb1-929e35c0ae2b\" (UID: \"d070d3e1-9867-4e0b-aeb1-929e35c0ae2b\") " May 15 00:09:20.812804 kubelet[2749]: I0515 00:09:20.811410 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-config-path\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.812804 kubelet[2749]: I0515 00:09:20.811435 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-cgroup\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.812804 kubelet[2749]: I0515 00:09:20.811459 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cni-path\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813321 kubelet[2749]: I0515 00:09:20.811489 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-run\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813321 kubelet[2749]: I0515 00:09:20.811515 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-net\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813321 kubelet[2749]: I0515 00:09:20.811545 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jldw\" (UniqueName: \"kubernetes.io/projected/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-kube-api-access-8jldw\") pod \"d070d3e1-9867-4e0b-aeb1-929e35c0ae2b\" (UID: \"d070d3e1-9867-4e0b-aeb1-929e35c0ae2b\") " May 15 00:09:20.813321 kubelet[2749]: I0515 00:09:20.811570 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-lib-modules\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813321 kubelet[2749]: I0515 00:09:20.811594 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-etc-cni-netd\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: 
\"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813321 kubelet[2749]: I0515 00:09:20.811637 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-xtables-lock\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813630 kubelet[2749]: I0515 00:09:20.811666 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-hostproc\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813630 kubelet[2749]: I0515 00:09:20.811693 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-hubble-tls\") pod \"5e45a971-439f-4641-ad69-ac96cf87af9a\" (UID: \"5e45a971-439f-4641-ad69-ac96cf87af9a\") " May 15 00:09:20.813630 kubelet[2749]: I0515 00:09:20.812144 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cni-path" (OuterVolumeSpecName: "cni-path") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.813630 kubelet[2749]: I0515 00:09:20.812244 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.815396 kubelet[2749]: I0515 00:09:20.813822 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.815592 kubelet[2749]: I0515 00:09:20.815565 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.816873 kubelet[2749]: I0515 00:09:20.816835 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.816873 kubelet[2749]: I0515 00:09:20.816891 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.817094 kubelet[2749]: I0515 00:09:20.816918 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.817094 kubelet[2749]: I0515 00:09:20.816941 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-hostproc" (OuterVolumeSpecName: "hostproc") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.817492 kubelet[2749]: I0515 00:09:20.817452 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.818312 kubelet[2749]: I0515 00:09:20.818268 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:09:20.819307 kubelet[2749]: I0515 00:09:20.819243 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:09:20.823106 kubelet[2749]: I0515 00:09:20.823072 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4" (OuterVolumeSpecName: "kube-api-access-gqvd4") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "kube-api-access-gqvd4". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:09:20.825821 kubelet[2749]: I0515 00:09:20.825602 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-kube-api-access-8jldw" (OuterVolumeSpecName: "kube-api-access-8jldw") pod "d070d3e1-9867-4e0b-aeb1-929e35c0ae2b" (UID: "d070d3e1-9867-4e0b-aeb1-929e35c0ae2b"). InnerVolumeSpecName "kube-api-access-8jldw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:09:20.826385 kubelet[2749]: I0515 00:09:20.826236 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d070d3e1-9867-4e0b-aeb1-929e35c0ae2b" (UID: "d070d3e1-9867-4e0b-aeb1-929e35c0ae2b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:09:20.827300 kubelet[2749]: I0515 00:09:20.827215 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e45a971-439f-4641-ad69-ac96cf87af9a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:09:20.827300 kubelet[2749]: I0515 00:09:20.827215 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e45a971-439f-4641-ad69-ac96cf87af9a" (UID: "5e45a971-439f-4641-ad69-ac96cf87af9a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:09:20.912800 kubelet[2749]: I0515 00:09:20.912726 2749 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-kernel\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.912800 kubelet[2749]: I0515 00:09:20.912775 2749 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gqvd4\" (UniqueName: \"kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-kube-api-access-gqvd4\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.912800 kubelet[2749]: I0515 00:09:20.912804 2749 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e45a971-439f-4641-ad69-ac96cf87af9a-clustermesh-secrets\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.912800 kubelet[2749]: I0515 00:09:20.912836 2749 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-bpf-maps\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913195 kubelet[2749]: I0515 00:09:20.912856 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-cilium-config-path\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913195 kubelet[2749]: I0515 00:09:20.912879 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-config-path\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913195 kubelet[2749]: I0515 00:09:20.912894 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-cgroup\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913195 kubelet[2749]: I0515 00:09:20.912907 2749 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cni-path\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913195 kubelet[2749]: I0515 00:09:20.912922 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-cilium-run\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913195 kubelet[2749]: I0515 00:09:20.912935 2749 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-host-proc-sys-net\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913195 kubelet[2749]: I0515 00:09:20.912948 2749 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8jldw\" (UniqueName: \"kubernetes.io/projected/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b-kube-api-access-8jldw\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913486 kubelet[2749]: I0515 00:09:20.912962 2749 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-etc-cni-netd\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913486 kubelet[2749]: I0515 00:09:20.912978 2749 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-xtables-lock\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913486 kubelet[2749]: I0515 00:09:20.912992 2749 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-lib-modules\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913486 kubelet[2749]: I0515 00:09:20.913009 2749 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e45a971-439f-4641-ad69-ac96cf87af9a-hostproc\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.913486 kubelet[2749]: I0515 00:09:20.913026 2749 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e45a971-439f-4641-ad69-ac96cf87af9a-hubble-tls\") on node \"ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826\" DevicePath \"\"" May 15 00:09:20.929608 kubelet[2749]: I0515 00:09:20.929572 2749 scope.go:117] "RemoveContainer" containerID="6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9" May 15 00:09:20.935815 containerd[1478]: time="2025-05-15T00:09:20.935261140Z" level=info msg="RemoveContainer for \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\"" May 15 00:09:20.940432 systemd[1]: Removed slice kubepods-besteffort-podd070d3e1_9867_4e0b_aeb1_929e35c0ae2b.slice - libcontainer container kubepods-besteffort-podd070d3e1_9867_4e0b_aeb1_929e35c0ae2b.slice. 
May 15 00:09:20.951364 containerd[1478]: time="2025-05-15T00:09:20.951252341Z" level=info msg="RemoveContainer for \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\" returns successfully"
May 15 00:09:20.955263 kubelet[2749]: I0515 00:09:20.952554 2749 scope.go:117] "RemoveContainer" containerID="6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9"
May 15 00:09:20.955451 containerd[1478]: time="2025-05-15T00:09:20.955133344Z" level=error msg="ContainerStatus for \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\": not found"
May 15 00:09:20.952970 systemd[1]: Removed slice kubepods-burstable-pod5e45a971_439f_4641_ad69_ac96cf87af9a.slice - libcontainer container kubepods-burstable-pod5e45a971_439f_4641_ad69_ac96cf87af9a.slice.
May 15 00:09:20.953172 systemd[1]: kubepods-burstable-pod5e45a971_439f_4641_ad69_ac96cf87af9a.slice: Consumed 9.867s CPU time, 127.9M memory peak, 136K read from disk, 13.3M written to disk.
May 15 00:09:20.956958 kubelet[2749]: E0515 00:09:20.956270 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\": not found" containerID="6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9"
May 15 00:09:20.956958 kubelet[2749]: I0515 00:09:20.956317 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9"} err="failed to get container status \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f309d2f5beaf0a133cd20a6d3cd05df7dda5aaba2f63ffa4b404e938ca549a9\": not found"
May 15 00:09:20.956958 kubelet[2749]: I0515 00:09:20.956442 2749 scope.go:117] "RemoveContainer" containerID="3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212"
May 15 00:09:20.962365 containerd[1478]: time="2025-05-15T00:09:20.962083141Z" level=info msg="RemoveContainer for \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\""
May 15 00:09:20.969918 containerd[1478]: time="2025-05-15T00:09:20.969489903Z" level=info msg="RemoveContainer for \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\" returns successfully"
May 15 00:09:20.970240 kubelet[2749]: I0515 00:09:20.970216 2749 scope.go:117] "RemoveContainer" containerID="2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9"
May 15 00:09:20.973839 containerd[1478]: time="2025-05-15T00:09:20.973798780Z" level=info msg="RemoveContainer for \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\""
May 15 00:09:20.985585 containerd[1478]: time="2025-05-15T00:09:20.985427859Z" level=info msg="RemoveContainer for \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\" returns successfully"
May 15 00:09:20.986214 kubelet[2749]: I0515 00:09:20.986188 2749 scope.go:117] "RemoveContainer" containerID="1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124"
May 15 00:09:20.990422 containerd[1478]: time="2025-05-15T00:09:20.989943133Z" level=info msg="RemoveContainer for \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\""
May 15 00:09:20.994656 containerd[1478]: time="2025-05-15T00:09:20.994592591Z" level=info msg="RemoveContainer for \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\" returns successfully"
May 15 00:09:20.994896 kubelet[2749]: I0515 00:09:20.994824 2749 scope.go:117] "RemoveContainer" containerID="d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3"
May 15 00:09:20.996565 containerd[1478]: time="2025-05-15T00:09:20.996433156Z" level=info msg="RemoveContainer for \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\""
May 15 00:09:21.000964 containerd[1478]: time="2025-05-15T00:09:21.000832194Z" level=info msg="RemoveContainer for \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\" returns successfully"
May 15 00:09:21.001347 kubelet[2749]: I0515 00:09:21.001144 2749 scope.go:117] "RemoveContainer" containerID="544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322"
May 15 00:09:21.002728 containerd[1478]: time="2025-05-15T00:09:21.002697663Z" level=info msg="RemoveContainer for \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\""
May 15 00:09:21.007358 containerd[1478]: time="2025-05-15T00:09:21.007309564Z" level=info msg="RemoveContainer for \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\" returns successfully"
May 15 00:09:21.007575 kubelet[2749]: I0515 00:09:21.007545 2749 scope.go:117] "RemoveContainer" containerID="3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212"
May 15 00:09:21.008450 containerd[1478]: time="2025-05-15T00:09:21.008085895Z" level=error msg="ContainerStatus for \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\": not found"
May 15 00:09:21.008572 kubelet[2749]: E0515 00:09:21.008255 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\": not found" containerID="3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212"
May 15 00:09:21.008572 kubelet[2749]: I0515 00:09:21.008289 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212"} err="failed to get container status \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\": rpc error: code = NotFound desc = an error occurred when try to find container \"3975882e54cd85fc75a7ad69db157128f3b349dae24dd18310fe5377f45a6212\": not found"
May 15 00:09:21.008572 kubelet[2749]: I0515 00:09:21.008318 2749 scope.go:117] "RemoveContainer" containerID="2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9"
May 15 00:09:21.008843 containerd[1478]: time="2025-05-15T00:09:21.008764590Z" level=error msg="ContainerStatus for \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\": not found"
May 15 00:09:21.008981 kubelet[2749]: E0515 00:09:21.008943 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\": not found" containerID="2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9"
May 15 00:09:21.009086 kubelet[2749]: I0515 00:09:21.008984 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9"} err="failed to get container status \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e2c4b469857ea36a3edb69e112adea68a8ff2565d7d8922fff64cf12e70f1b9\": not found"
May 15 00:09:21.009086 kubelet[2749]: I0515 00:09:21.009012 2749 scope.go:117] "RemoveContainer" containerID="1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124"
May 15 00:09:21.009676 containerd[1478]: time="2025-05-15T00:09:21.009506247Z" level=error msg="ContainerStatus for \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\": not found"
May 15 00:09:21.009871 kubelet[2749]: E0515 00:09:21.009728 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\": not found" containerID="1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124"
May 15 00:09:21.009871 kubelet[2749]: I0515 00:09:21.009765 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124"} err="failed to get container status \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d132033d81ca7049199dc70a5dd90c07405a1319c1bf4f3c790cca51b1f2124\": not found"
May 15 00:09:21.009871 kubelet[2749]: I0515 00:09:21.009791 2749 scope.go:117] "RemoveContainer" containerID="d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3"
May 15 00:09:21.010173 containerd[1478]: time="2025-05-15T00:09:21.010032407Z" level=error msg="ContainerStatus for \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\": not found"
May 15 00:09:21.010256 kubelet[2749]: E0515 00:09:21.010226 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\": not found" containerID="d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3"
May 15 00:09:21.010350 kubelet[2749]: I0515 00:09:21.010261 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3"} err="failed to get container status \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0fb8b07fa3cae9768c392f45545e2e1340d92b3700302bd5776652cdf7520f3\": not found"
May 15 00:09:21.010350 kubelet[2749]: I0515 00:09:21.010286 2749 scope.go:117] "RemoveContainer" containerID="544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322"
May 15 00:09:21.010728 containerd[1478]: time="2025-05-15T00:09:21.010688567Z" level=error msg="ContainerStatus for \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\": not found"
May 15 00:09:21.011047 kubelet[2749]: E0515 00:09:21.011013 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\": not found" containerID="544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322"
May 15 00:09:21.011159 kubelet[2749]: I0515 00:09:21.011070 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322"} err="failed to get container status \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\": rpc error: code = NotFound desc = an error occurred when try to find container \"544b3e7f4e72ca691a9e4fc7933124ef960dd6f5347720fd19ef023b8418b322\": not found"
May 15 00:09:21.459786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00-rootfs.mount: Deactivated successfully.
May 15 00:09:21.459977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00-shm.mount: Deactivated successfully.
May 15 00:09:21.460120 systemd[1]: var-lib-kubelet-pods-5e45a971\x2d439f\x2d4641\x2dad69\x2dac96cf87af9a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgqvd4.mount: Deactivated successfully.
May 15 00:09:21.460247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b-rootfs.mount: Deactivated successfully.
May 15 00:09:21.460357 systemd[1]: var-lib-kubelet-pods-d070d3e1\x2d9867\x2d4e0b\x2daeb1\x2d929e35c0ae2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8jldw.mount: Deactivated successfully.
May 15 00:09:21.460491 systemd[1]: var-lib-kubelet-pods-5e45a971\x2d439f\x2d4641\x2dad69\x2dac96cf87af9a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 00:09:21.460602 systemd[1]: var-lib-kubelet-pods-5e45a971\x2d439f\x2d4641\x2dad69\x2dac96cf87af9a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 00:09:22.416596 sshd[4368]: Connection closed by 147.75.109.163 port 53282
May 15 00:09:22.417630 sshd-session[4366]: pam_unix(sshd:session): session closed for user core
May 15 00:09:22.422711 systemd[1]: sshd@29-10.128.0.108:22-147.75.109.163:53282.service: Deactivated successfully.
May 15 00:09:22.425840 systemd[1]: session-26.scope: Deactivated successfully.
May 15 00:09:22.426297 systemd[1]: session-26.scope: Consumed 1.553s CPU time, 23.9M memory peak.
May 15 00:09:22.429221 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
May 15 00:09:22.431543 systemd-logind[1458]: Removed session 26.
May 15 00:09:22.477445 systemd[1]: Started sshd@30-10.128.0.108:22-147.75.109.163:51414.service - OpenSSH per-connection server daemon (147.75.109.163:51414).
May 15 00:09:22.508494 kubelet[2749]: I0515 00:09:22.508440 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" path="/var/lib/kubelet/pods/5e45a971-439f-4641-ad69-ac96cf87af9a/volumes"
May 15 00:09:22.510756 kubelet[2749]: I0515 00:09:22.510403 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d070d3e1-9867-4e0b-aeb1-929e35c0ae2b" path="/var/lib/kubelet/pods/d070d3e1-9867-4e0b-aeb1-929e35c0ae2b/volumes"
May 15 00:09:22.541822 containerd[1478]: time="2025-05-15T00:09:22.541770398Z" level=info msg="StopPodSandbox for \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\""
May 15 00:09:22.542575 containerd[1478]: time="2025-05-15T00:09:22.541894711Z" level=info msg="TearDown network for sandbox \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" successfully"
May 15 00:09:22.542575 containerd[1478]: time="2025-05-15T00:09:22.541915772Z" level=info msg="StopPodSandbox for \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" returns successfully"
May 15 00:09:22.542575 containerd[1478]: time="2025-05-15T00:09:22.542481846Z" level=info msg="RemovePodSandbox for \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\""
May 15 00:09:22.542575 containerd[1478]: time="2025-05-15T00:09:22.542518234Z" level=info msg="Forcibly stopping sandbox \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\""
May 15 00:09:22.542780 containerd[1478]: time="2025-05-15T00:09:22.542609905Z" level=info msg="TearDown network for sandbox \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" successfully"
May 15 00:09:22.547848 containerd[1478]: time="2025-05-15T00:09:22.547798002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 15 00:09:22.547973 containerd[1478]: time="2025-05-15T00:09:22.547871386Z" level=info msg="RemovePodSandbox \"aa0fd4a969b1e0e9bf1c03aca084272a2472052bbf0a85af0023b90f897f587b\" returns successfully"
May 15 00:09:22.549064 containerd[1478]: time="2025-05-15T00:09:22.548765712Z" level=info msg="StopPodSandbox for \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\""
May 15 00:09:22.549064 containerd[1478]: time="2025-05-15T00:09:22.548867511Z" level=info msg="TearDown network for sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" successfully"
May 15 00:09:22.549064 containerd[1478]: time="2025-05-15T00:09:22.548886428Z" level=info msg="StopPodSandbox for \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" returns successfully"
May 15 00:09:22.551066 containerd[1478]: time="2025-05-15T00:09:22.549605916Z" level=info msg="RemovePodSandbox for \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\""
May 15 00:09:22.551066 containerd[1478]: time="2025-05-15T00:09:22.549643796Z" level=info msg="Forcibly stopping sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\""
May 15 00:09:22.551066 containerd[1478]: time="2025-05-15T00:09:22.549719588Z" level=info msg="TearDown network for sandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" successfully"
May 15 00:09:22.554297 containerd[1478]: time="2025-05-15T00:09:22.554249160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 15 00:09:22.554405 containerd[1478]: time="2025-05-15T00:09:22.554302338Z" level=info msg="RemovePodSandbox \"d8b65266cd7f39b1a661a28b9ab1663c35b617311621818bb283b18c32774d00\" returns successfully"
May 15 00:09:22.700701 kubelet[2749]: E0515 00:09:22.700372 2749 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:09:22.792481 sshd[4531]: Accepted publickey for core from 147.75.109.163 port 51414 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs
May 15 00:09:22.794517 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:22.801970 systemd-logind[1458]: New session 27 of user core.
May 15 00:09:22.807317 systemd[1]: Started session-27.scope - Session 27 of User core.
May 15 00:09:23.055511 ntpd[1446]: Deleting interface #11 lxc_health, fe80::ec2a:87ff:fe60:27b0%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs
May 15 00:09:23.058261 ntpd[1446]: 15 May 00:09:23 ntpd[1446]: Deleting interface #11 lxc_health, fe80::ec2a:87ff:fe60:27b0%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs
May 15 00:09:23.468479 sshd[4535]: Connection closed by 147.75.109.163 port 51414
May 15 00:09:23.469473 sshd-session[4531]: pam_unix(sshd:session): session closed for user core
May 15 00:09:23.479015 systemd[1]: sshd@30-10.128.0.108:22-147.75.109.163:51414.service: Deactivated successfully.
May 15 00:09:23.487859 systemd[1]: session-27.scope: Deactivated successfully.
May 15 00:09:23.490509 kubelet[2749]: I0515 00:09:23.490259 2749 topology_manager.go:215] "Topology Admit Handler" podUID="ddf9988e-d568-4395-becb-e6aead4991ec" podNamespace="kube-system" podName="cilium-mc967"
May 15 00:09:23.490509 kubelet[2749]: E0515 00:09:23.490364 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d070d3e1-9867-4e0b-aeb1-929e35c0ae2b" containerName="cilium-operator"
May 15 00:09:23.490509 kubelet[2749]: E0515 00:09:23.490382 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" containerName="apply-sysctl-overwrites"
May 15 00:09:23.490509 kubelet[2749]: E0515 00:09:23.490392 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" containerName="mount-bpf-fs"
May 15 00:09:23.490509 kubelet[2749]: E0515 00:09:23.490433 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" containerName="clean-cilium-state"
May 15 00:09:23.490509 kubelet[2749]: E0515 00:09:23.490447 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" containerName="mount-cgroup"
May 15 00:09:23.490509 kubelet[2749]: E0515 00:09:23.490457 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" containerName="cilium-agent"
May 15 00:09:23.494678 kubelet[2749]: I0515 00:09:23.493108 2749 memory_manager.go:354] "RemoveStaleState removing state" podUID="d070d3e1-9867-4e0b-aeb1-929e35c0ae2b" containerName="cilium-operator"
May 15 00:09:23.494678 kubelet[2749]: I0515 00:09:23.493297 2749 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e45a971-439f-4641-ad69-ac96cf87af9a" containerName="cilium-agent"
May 15 00:09:23.498035 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit.
May 15 00:09:23.506081 systemd-logind[1458]: Removed session 27.
May 15 00:09:23.533363 kubelet[2749]: I0515 00:09:23.532429 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddf9988e-d568-4395-becb-e6aead4991ec-clustermesh-secrets\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.533363 kubelet[2749]: I0515 00:09:23.532502 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-host-proc-sys-kernel\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.533363 kubelet[2749]: I0515 00:09:23.532543 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-cilium-run\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.533363 kubelet[2749]: I0515 00:09:23.532578 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ddf9988e-d568-4395-becb-e6aead4991ec-cilium-ipsec-secrets\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.533363 kubelet[2749]: I0515 00:09:23.532613 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-host-proc-sys-net\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535255 kubelet[2749]: I0515 00:09:23.532647 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddf9988e-d568-4395-becb-e6aead4991ec-hubble-tls\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535255 kubelet[2749]: I0515 00:09:23.534098 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-cni-path\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535255 kubelet[2749]: I0515 00:09:23.534149 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-lib-modules\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535255 kubelet[2749]: I0515 00:09:23.534186 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-hostproc\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535255 kubelet[2749]: I0515 00:09:23.534211 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-cilium-cgroup\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535255 kubelet[2749]: I0515 00:09:23.534241 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-etc-cni-netd\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535621 kubelet[2749]: I0515 00:09:23.534276 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2h8g\" (UniqueName: \"kubernetes.io/projected/ddf9988e-d568-4395-becb-e6aead4991ec-kube-api-access-m2h8g\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535621 kubelet[2749]: I0515 00:09:23.534319 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-bpf-maps\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535621 kubelet[2749]: I0515 00:09:23.534355 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf9988e-d568-4395-becb-e6aead4991ec-xtables-lock\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.535621 kubelet[2749]: I0515 00:09:23.534383 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddf9988e-d568-4395-becb-e6aead4991ec-cilium-config-path\") pod \"cilium-mc967\" (UID: \"ddf9988e-d568-4395-becb-e6aead4991ec\") " pod="kube-system/cilium-mc967"
May 15 00:09:23.551468 systemd[1]: Started sshd@31-10.128.0.108:22-147.75.109.163:51418.service - OpenSSH per-connection server daemon (147.75.109.163:51418).
May 15 00:09:23.560035 systemd[1]: Created slice kubepods-burstable-podddf9988e_d568_4395_becb_e6aead4991ec.slice - libcontainer container kubepods-burstable-podddf9988e_d568_4395_becb_e6aead4991ec.slice.
May 15 00:09:23.872850 containerd[1478]: time="2025-05-15T00:09:23.872751313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mc967,Uid:ddf9988e-d568-4395-becb-e6aead4991ec,Namespace:kube-system,Attempt:0,}"
May 15 00:09:23.878675 sshd[4546]: Accepted publickey for core from 147.75.109.163 port 51418 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs
May 15 00:09:23.881052 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:23.905918 systemd-logind[1458]: New session 28 of user core.
May 15 00:09:23.908259 systemd[1]: Started session-28.scope - Session 28 of User core.
May 15 00:09:23.912209 containerd[1478]: time="2025-05-15T00:09:23.912089057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:09:23.912607 containerd[1478]: time="2025-05-15T00:09:23.912276944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:09:23.913068 containerd[1478]: time="2025-05-15T00:09:23.912837491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:09:23.914344 containerd[1478]: time="2025-05-15T00:09:23.913159259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:09:23.940312 systemd[1]: Started cri-containerd-a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7.scope - libcontainer container a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7.
May 15 00:09:23.972370 containerd[1478]: time="2025-05-15T00:09:23.972302853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mc967,Uid:ddf9988e-d568-4395-becb-e6aead4991ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\""
May 15 00:09:23.976718 containerd[1478]: time="2025-05-15T00:09:23.976671049Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:09:23.994686 containerd[1478]: time="2025-05-15T00:09:23.994619272Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83\""
May 15 00:09:23.995841 containerd[1478]: time="2025-05-15T00:09:23.995807554Z" level=info msg="StartContainer for \"561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83\""
May 15 00:09:24.034320 systemd[1]: Started cri-containerd-561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83.scope - libcontainer container 561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83.
May 15 00:09:24.072090 containerd[1478]: time="2025-05-15T00:09:24.071966305Z" level=info msg="StartContainer for \"561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83\" returns successfully"
May 15 00:09:24.085997 systemd[1]: cri-containerd-561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83.scope: Deactivated successfully.
May 15 00:09:24.094079 sshd[4571]: Connection closed by 147.75.109.163 port 51418
May 15 00:09:24.093818 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
May 15 00:09:24.102829 systemd[1]: sshd@31-10.128.0.108:22-147.75.109.163:51418.service: Deactivated successfully.
May 15 00:09:24.107503 systemd[1]: session-28.scope: Deactivated successfully.
May 15 00:09:24.109476 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit.
May 15 00:09:24.112015 systemd-logind[1458]: Removed session 28.
May 15 00:09:24.134433 containerd[1478]: time="2025-05-15T00:09:24.134153658Z" level=info msg="shim disconnected" id=561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83 namespace=k8s.io
May 15 00:09:24.134757 containerd[1478]: time="2025-05-15T00:09:24.134723891Z" level=warning msg="cleaning up after shim disconnected" id=561d8c68c0e17f73447e5d61ecfcd50d79437731b116aa5dc0db3fd6c95d1a83 namespace=k8s.io
May 15 00:09:24.134917 containerd[1478]: time="2025-05-15T00:09:24.134893860Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:09:24.154809 systemd[1]: Started sshd@32-10.128.0.108:22-147.75.109.163:51426.service - OpenSSH per-connection server daemon (147.75.109.163:51426).
May 15 00:09:24.452336 sshd[4659]: Accepted publickey for core from 147.75.109.163 port 51426 ssh2: RSA SHA256:nB3rfAjpYEbSq8n6PQrtMylFAWgbUednLoUiXVQVfZs
May 15 00:09:24.454441 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:24.461572 systemd-logind[1458]: New session 29 of user core.
May 15 00:09:24.467262 systemd[1]: Started session-29.scope - Session 29 of User core.
May 15 00:09:24.957304 containerd[1478]: time="2025-05-15T00:09:24.957001594Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:09:24.980836 containerd[1478]: time="2025-05-15T00:09:24.979428236Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a\""
May 15 00:09:24.981290 containerd[1478]: time="2025-05-15T00:09:24.981256544Z" level=info msg="StartContainer for \"d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a\""
May 15 00:09:24.983868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014388242.mount: Deactivated successfully.
May 15 00:09:25.036293 systemd[1]: Started cri-containerd-d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a.scope - libcontainer container d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a.
May 15 00:09:25.076246 containerd[1478]: time="2025-05-15T00:09:25.075861918Z" level=info msg="StartContainer for \"d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a\" returns successfully"
May 15 00:09:25.084662 systemd[1]: cri-containerd-d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a.scope: Deactivated successfully.
May 15 00:09:25.117354 containerd[1478]: time="2025-05-15T00:09:25.117251076Z" level=info msg="shim disconnected" id=d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a namespace=k8s.io
May 15 00:09:25.117354 containerd[1478]: time="2025-05-15T00:09:25.117324520Z" level=warning msg="cleaning up after shim disconnected" id=d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a namespace=k8s.io
May 15 00:09:25.117354 containerd[1478]: time="2025-05-15T00:09:25.117352915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:09:25.135567 containerd[1478]: time="2025-05-15T00:09:25.135493039Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:09:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 00:09:25.313427 kubelet[2749]: I0515 00:09:25.313255 2749 setters.go:580] "Node became not ready" node="ci-4230-1-1-nightly-20250514-2100-b88bb9971b9672b75826" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:09:25Z","lastTransitionTime":"2025-05-15T00:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 00:09:25.644777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d87b824c1c0581686c7c9ee880bdb8c3c0cd95d08f3164a9dacd00423172b40a-rootfs.mount: Deactivated successfully.
May 15 00:09:25.961877 containerd[1478]: time="2025-05-15T00:09:25.961727799Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:09:25.986994 containerd[1478]: time="2025-05-15T00:09:25.986917390Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c\""
May 15 00:09:25.987877 containerd[1478]: time="2025-05-15T00:09:25.987575519Z" level=info msg="StartContainer for \"6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c\""
May 15 00:09:26.041331 systemd[1]: Started cri-containerd-6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c.scope - libcontainer container 6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c.
May 15 00:09:26.089776 containerd[1478]: time="2025-05-15T00:09:26.088898090Z" level=info msg="StartContainer for \"6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c\" returns successfully"
May 15 00:09:26.093259 systemd[1]: cri-containerd-6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c.scope: Deactivated successfully.
May 15 00:09:26.127740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c-rootfs.mount: Deactivated successfully.
May 15 00:09:26.132378 containerd[1478]: time="2025-05-15T00:09:26.132303799Z" level=info msg="shim disconnected" id=6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c namespace=k8s.io
May 15 00:09:26.132619 containerd[1478]: time="2025-05-15T00:09:26.132404246Z" level=warning msg="cleaning up after shim disconnected" id=6a5c1ccc0335870bd953b13d8dc58d077c739f149c6241f7eb23ef41e688a24c namespace=k8s.io
May 15 00:09:26.132619 containerd[1478]: time="2025-05-15T00:09:26.132422742Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:09:26.504111 kubelet[2749]: E0515 00:09:26.503886 2749 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b9gd2" podUID="f5c6f8c5-eb07-421b-8646-1c356992408a"
May 15 00:09:26.967498 containerd[1478]: time="2025-05-15T00:09:26.967413354Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:09:27.001035 containerd[1478]: time="2025-05-15T00:09:27.000832475Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a\""
May 15 00:09:27.004137 containerd[1478]: time="2025-05-15T00:09:27.004088517Z" level=info msg="StartContainer for \"404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a\""
May 15 00:09:27.007491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229442454.mount: Deactivated successfully.
May 15 00:09:27.056345 systemd[1]: Started cri-containerd-404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a.scope - libcontainer container 404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a.
May 15 00:09:27.111221 containerd[1478]: time="2025-05-15T00:09:27.110982630Z" level=info msg="StartContainer for \"404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a\" returns successfully"
May 15 00:09:27.113777 systemd[1]: cri-containerd-404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a.scope: Deactivated successfully.
May 15 00:09:27.171264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a-rootfs.mount: Deactivated successfully.
May 15 00:09:27.176991 containerd[1478]: time="2025-05-15T00:09:27.176664550Z" level=info msg="shim disconnected" id=404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a namespace=k8s.io
May 15 00:09:27.176991 containerd[1478]: time="2025-05-15T00:09:27.176735101Z" level=warning msg="cleaning up after shim disconnected" id=404cb8ef2c7578ee5f8afe2aa018a657ba85764c32481a70b35ad4ed16ff141a namespace=k8s.io
May 15 00:09:27.176991 containerd[1478]: time="2025-05-15T00:09:27.176751611Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:09:27.223214 containerd[1478]: time="2025-05-15T00:09:27.222988149Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:09:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 00:09:27.702057 kubelet[2749]: E0515 00:09:27.701984 2749 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:09:27.973932 containerd[1478]: time="2025-05-15T00:09:27.973788236Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:09:28.004601 containerd[1478]: time="2025-05-15T00:09:28.004541286Z" level=info msg="CreateContainer within sandbox \"a8b9aa66301c3d0b39a9fc540a55df83d99bfc0f19459bfdca9f5187640b88e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"185c6a3684f48b9d3a86657098f3a4c2d776eca64512fc65e85b2639ed030f3b\""
May 15 00:09:28.007150 containerd[1478]: time="2025-05-15T00:09:28.005642036Z" level=info msg="StartContainer for \"185c6a3684f48b9d3a86657098f3a4c2d776eca64512fc65e85b2639ed030f3b\""
May 15 00:09:28.055242 systemd[1]: Started cri-containerd-185c6a3684f48b9d3a86657098f3a4c2d776eca64512fc65e85b2639ed030f3b.scope - libcontainer container 185c6a3684f48b9d3a86657098f3a4c2d776eca64512fc65e85b2639ed030f3b.
May 15 00:09:28.101397 containerd[1478]: time="2025-05-15T00:09:28.101248742Z" level=info msg="StartContainer for \"185c6a3684f48b9d3a86657098f3a4c2d776eca64512fc65e85b2639ed030f3b\" returns successfully"
May 15 00:09:28.505094 kubelet[2749]: E0515 00:09:28.503578 2749 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b9gd2" podUID="f5c6f8c5-eb07-421b-8646-1c356992408a"
May 15 00:09:28.618935 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 15 00:09:29.001670 kubelet[2749]: I0515 00:09:29.001567 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mc967" podStartSLOduration=6.001538644 podStartE2EDuration="6.001538644s" podCreationTimestamp="2025-05-15 00:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:09:29.000965283 +0000 UTC m=+126.709096316" watchObservedRunningTime="2025-05-15 00:09:29.001538644 +0000 UTC m=+126.709669663"
May 15 00:09:30.508843 kubelet[2749]: E0515 00:09:30.508394 2749 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b9gd2" podUID="f5c6f8c5-eb07-421b-8646-1c356992408a"
May 15 00:09:31.062706 kubelet[2749]: E0515 00:09:31.062654 2749 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53256->127.0.0.1:33801: write tcp 127.0.0.1:53256->127.0.0.1:33801: write: broken pipe
May 15 00:09:32.055851 systemd-networkd[1387]: lxc_health: Link UP
May 15 00:09:32.056448 systemd-networkd[1387]: lxc_health: Gained carrier
May 15 00:09:32.505729 kubelet[2749]: E0515 00:09:32.505527 2749 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b9gd2" podUID="f5c6f8c5-eb07-421b-8646-1c356992408a"
May 15 00:09:33.218289 systemd[1]: run-containerd-runc-k8s.io-185c6a3684f48b9d3a86657098f3a4c2d776eca64512fc65e85b2639ed030f3b-runc.UWbqZa.mount: Deactivated successfully.
May 15 00:09:33.531416 systemd-networkd[1387]: lxc_health: Gained IPv6LL
May 15 00:09:35.529473 systemd[1]: run-containerd-runc-k8s.io-185c6a3684f48b9d3a86657098f3a4c2d776eca64512fc65e85b2639ed030f3b-runc.fyn6VT.mount: Deactivated successfully.
May 15 00:09:36.054920 ntpd[1446]: Listen normally on 14 lxc_health [fe80::d0d0:d0ff:fe21:25de%14]:123
May 15 00:09:36.055929 ntpd[1446]: 15 May 00:09:36 ntpd[1446]: Listen normally on 14 lxc_health [fe80::d0d0:d0ff:fe21:25de%14]:123
May 15 00:09:37.977077 sshd[4666]: Connection closed by 147.75.109.163 port 51426
May 15 00:09:37.979876 sshd-session[4659]: pam_unix(sshd:session): session closed for user core
May 15 00:09:37.990552 systemd[1]: sshd@32-10.128.0.108:22-147.75.109.163:51426.service: Deactivated successfully.
May 15 00:09:37.998550 systemd[1]: session-29.scope: Deactivated successfully.
May 15 00:09:38.005091 systemd-logind[1458]: Session 29 logged out. Waiting for processes to exit.
May 15 00:09:38.009548 systemd-logind[1458]: Removed session 29.