Jun 20 19:10:22.088796 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025
Jun 20 19:10:22.088847 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 19:10:22.088866 kernel: BIOS-provided physical RAM map:
Jun 20 19:10:22.088881 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jun 20 19:10:22.088895 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jun 20 19:10:22.088910 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jun 20 19:10:22.088928 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jun 20 19:10:22.088943 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jun 20 19:10:22.088963 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable
Jun 20 19:10:22.088978 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data
Jun 20 19:10:22.088994 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable
Jun 20 19:10:22.089009 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jun 20 19:10:22.089021 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jun 20 19:10:22.089033 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jun 20 19:10:22.089056 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jun 20 19:10:22.089072 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jun 20 19:10:22.089088 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jun 20 19:10:22.089104 kernel: NX (Execute Disable) protection: active
Jun 20 19:10:22.089121 kernel: APIC: Static calls initialized
Jun 20 19:10:22.089138 kernel: efi: EFI v2.7 by EDK II
Jun 20 19:10:22.089154 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018
Jun 20 19:10:22.089170 kernel: random: crng init done
Jun 20 19:10:22.089187 kernel: secureboot: Secure boot disabled
Jun 20 19:10:22.089203 kernel: SMBIOS 2.4 present.
Jun 20 19:10:22.089226 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Jun 20 19:10:22.089243 kernel: Hypervisor detected: KVM
Jun 20 19:10:22.089260 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 19:10:22.089276 kernel: kvm-clock: using sched offset of 13420714517 cycles
Jun 20 19:10:22.089293 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 19:10:22.089336 kernel: tsc: Detected 2299.998 MHz processor
Jun 20 19:10:22.089353 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:10:22.089370 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:10:22.089386 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jun 20 19:10:22.089402 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jun 20 19:10:22.089424 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:10:22.089439 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jun 20 19:10:22.089455 kernel: Using GB pages for direct mapping
Jun 20 19:10:22.089471 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:10:22.089488 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jun 20 19:10:22.089505 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jun 20 19:10:22.089530 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jun 20 19:10:22.089551 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jun 20 19:10:22.089568 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jun 20 19:10:22.089586 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Jun 20 19:10:22.089604 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jun 20 19:10:22.089622 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jun 20 19:10:22.089640 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jun 20 19:10:22.089657 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jun 20 19:10:22.089679 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jun 20 19:10:22.089697 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jun 20 19:10:22.089714 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jun 20 19:10:22.089732 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jun 20 19:10:22.089750 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jun 20 19:10:22.089768 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jun 20 19:10:22.089785 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jun 20 19:10:22.089801 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jun 20 19:10:22.089819 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jun 20 19:10:22.089840 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jun 20 19:10:22.089858 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 20 19:10:22.089875 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 20 19:10:22.089893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jun 20 19:10:22.089911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jun 20 19:10:22.089928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jun 20 19:10:22.089946 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jun 20 19:10:22.089964 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jun 20 19:10:22.089982 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jun 20 19:10:22.090003 kernel: Zone ranges:
Jun 20 19:10:22.090021 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:10:22.090038 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 19:10:22.090055 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jun 20 19:10:22.090072 kernel: Movable zone start for each node
Jun 20 19:10:22.090090 kernel: Early memory node ranges
Jun 20 19:10:22.090107 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jun 20 19:10:22.090132 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jun 20 19:10:22.090150 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff]
Jun 20 19:10:22.090171 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff]
Jun 20 19:10:22.090189 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jun 20 19:10:22.090207 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jun 20 19:10:22.090225 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jun 20 19:10:22.090242 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:10:22.090267 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jun 20 19:10:22.090285 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jun 20 19:10:22.090302 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Jun 20 19:10:22.090351 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jun 20 19:10:22.090372 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jun 20 19:10:22.090388 kernel: ACPI: PM-Timer IO Port: 0xb008
Jun 20 19:10:22.090527 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 19:10:22.090548 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:10:22.090563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 19:10:22.090581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:10:22.090599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 19:10:22.090616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 19:10:22.090633 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:10:22.090656 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 20 19:10:22.090810 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jun 20 19:10:22.090832 kernel: Booting paravirtualized kernel on KVM
Jun 20 19:10:22.090851 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:10:22.090869 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 19:10:22.090887 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jun 20 19:10:22.091073 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jun 20 19:10:22.091091 kernel: pcpu-alloc: [0] 0 1
Jun 20 19:10:22.091255 kernel: kvm-guest: PV spinlocks enabled
Jun 20 19:10:22.091279 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:10:22.091300 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 19:10:22.091474 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:10:22.091492 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 20 19:10:22.091509 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:10:22.091525 kernel: Fallback order for Node 0: 0
Jun 20 19:10:22.091542 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Jun 20 19:10:22.091678 kernel: Policy zone: Normal
Jun 20 19:10:22.091703 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:10:22.091720 kernel: software IO TLB: area num 2.
Jun 20 19:10:22.091738 kernel: Memory: 7511328K/7860552K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 348968K reserved, 0K cma-reserved)
Jun 20 19:10:22.091757 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:10:22.091775 kernel: Kernel/User page tables isolation: enabled
Jun 20 19:10:22.091792 kernel: ftrace: allocating 37938 entries in 149 pages
Jun 20 19:10:22.091809 kernel: ftrace: allocated 149 pages with 4 groups
Jun 20 19:10:22.091826 kernel: Dynamic Preempt: voluntary
Jun 20 19:10:22.091861 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:10:22.091881 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:10:22.091899 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:10:22.091918 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:10:22.091941 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:10:22.091959 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:10:22.091978 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:10:22.091996 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:10:22.092015 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 20 19:10:22.092038 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:10:22.092056 kernel: Console: colour dummy device 80x25
Jun 20 19:10:22.092074 kernel: printk: console [ttyS0] enabled
Jun 20 19:10:22.092093 kernel: ACPI: Core revision 20230628
Jun 20 19:10:22.092112 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:10:22.092131 kernel: x2apic enabled
Jun 20 19:10:22.092149 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:10:22.092168 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jun 20 19:10:22.092187 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jun 20 19:10:22.092210 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jun 20 19:10:22.092230 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jun 20 19:10:22.092249 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jun 20 19:10:22.092268 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:10:22.092287 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jun 20 19:10:22.092315 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jun 20 19:10:22.092359 kernel: Spectre V2 : Mitigation: IBRS
Jun 20 19:10:22.092377 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:10:22.092397 kernel: RETBleed: Mitigation: IBRS
Jun 20 19:10:22.092419 kernel: Spectre V2 : User space: Vulnerable
Jun 20 19:10:22.092438 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 20 19:10:22.092457 kernel: MDS: Mitigation: Clear CPU buffers
Jun 20 19:10:22.092476 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 20 19:10:22.092494 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 20 19:10:22.092513 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:10:22.092532 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:10:22.092551 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:10:22.092569 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:10:22.092591 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jun 20 19:10:22.092610 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:10:22.092629 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:10:22.092648 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jun 20 19:10:22.092667 kernel: landlock: Up and running.
Jun 20 19:10:22.092685 kernel: SELinux: Initializing.
Jun 20 19:10:22.092704 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:10:22.092723 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:10:22.092742 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jun 20 19:10:22.092765 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:10:22.092784 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:10:22.092803 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:10:22.092822 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jun 20 19:10:22.092841 kernel: signal: max sigframe size: 1776
Jun 20 19:10:22.092860 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:10:22.092878 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:10:22.092897 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 20 19:10:22.092919 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:10:22.092938 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:10:22.092957 kernel: .... node #0, CPUs: #1
Jun 20 19:10:22.092977 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jun 20 19:10:22.092997 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jun 20 19:10:22.093016 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:10:22.093034 kernel: smpboot: Max logical packages: 1
Jun 20 19:10:22.093053 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jun 20 19:10:22.093072 kernel: devtmpfs: initialized
Jun 20 19:10:22.093095 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:10:22.093113 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jun 20 19:10:22.093132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:10:22.093151 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:10:22.093170 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:10:22.093189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:10:22.093208 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:10:22.093227 kernel: audit: type=2000 audit(1750446620.568:1): state=initialized audit_enabled=0 res=1
Jun 20 19:10:22.093245 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:10:22.093268 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:10:22.093287 kernel: cpuidle: using governor menu
Jun 20 19:10:22.093312 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:10:22.093354 kernel: dca service started, version 1.12.1
Jun 20 19:10:22.093373 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:10:22.093392 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:10:22.093412 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:10:22.093430 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:10:22.093448 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:10:22.093472 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:10:22.093491 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:10:22.093510 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:10:22.093528 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:10:22.093547 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jun 20 19:10:22.093564 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 20 19:10:22.093583 kernel: ACPI: Interpreter enabled
Jun 20 19:10:22.093602 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 20 19:10:22.093621 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:10:22.093645 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:10:22.093664 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 20 19:10:22.093682 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jun 20 19:10:22.093701 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:10:22.093965 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:10:22.094167 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 20 19:10:22.096410 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 20 19:10:22.096449 kernel: PCI host bridge to bus 0000:00
Jun 20 19:10:22.096647 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:10:22.096823 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 19:10:22.096993 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:10:22.097163 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jun 20 19:10:22.097364 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:10:22.097574 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 20 19:10:22.097783 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jun 20 19:10:22.097983 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 20 19:10:22.098173 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jun 20 19:10:22.098434 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jun 20 19:10:22.098640 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jun 20 19:10:22.098836 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jun 20 19:10:22.099050 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jun 20 19:10:22.099254 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jun 20 19:10:22.099493 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jun 20 19:10:22.099700 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jun 20 19:10:22.099898 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jun 20 19:10:22.100090 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jun 20 19:10:22.100115 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 19:10:22.100141 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 19:10:22.100160 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:10:22.100179 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 19:10:22.100198 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 20 19:10:22.100217 kernel: iommu: Default domain type: Translated
Jun 20 19:10:22.100236 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:10:22.100262 kernel: efivars: Registered efivars operations
Jun 20 19:10:22.100281 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:10:22.100299 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:10:22.101521 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jun 20 19:10:22.101544 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jun 20 19:10:22.101563 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff]
Jun 20 19:10:22.101582 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jun 20 19:10:22.101601 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jun 20 19:10:22.101620 kernel: vgaarb: loaded
Jun 20 19:10:22.101639 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 19:10:22.101658 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:10:22.101677 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:10:22.101703 kernel: pnp: PnP ACPI init
Jun 20 19:10:22.101722 kernel: pnp: PnP ACPI: found 7 devices
Jun 20 19:10:22.101741 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:10:22.101760 kernel: NET: Registered PF_INET protocol family
Jun 20 19:10:22.101780 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 19:10:22.101800 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 20 19:10:22.101819 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:10:22.101839 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:10:22.101857 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 20 19:10:22.101881 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 20 19:10:22.101901 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:10:22.101920 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 19:10:22.101940 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:10:22.101959 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:10:22.102157 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 19:10:22.102415 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 19:10:22.102592 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 19:10:22.102769 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jun 20 19:10:22.102964 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 20 19:10:22.102991 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:10:22.103010 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 19:10:22.103028 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jun 20 19:10:22.103047 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 20 19:10:22.103065 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jun 20 19:10:22.103084 kernel: clocksource: Switched to clocksource tsc
Jun 20 19:10:22.103116 kernel: Initialise system trusted keyrings
Jun 20 19:10:22.103141 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 20 19:10:22.103161 kernel: Key type asymmetric registered
Jun 20 19:10:22.103178 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:10:22.103197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 20 19:10:22.103215 kernel: io scheduler mq-deadline registered
Jun 20 19:10:22.103233 kernel: io scheduler kyber registered
Jun 20 19:10:22.103251 kernel: io scheduler bfq registered
Jun 20 19:10:22.103270 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:10:22.103294 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 20 19:10:22.103888 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jun 20 19:10:22.104037 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jun 20 19:10:22.104509 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jun 20 19:10:22.104535 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 20 19:10:22.104839 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jun 20 19:10:22.104862 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:10:22.104881 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:10:22.104900 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 20 19:10:22.104925 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jun 20 19:10:22.104943 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jun 20 19:10:22.105132 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jun 20 19:10:22.105157 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 19:10:22.105175 kernel: i8042: Warning: Keylock active
Jun 20 19:10:22.105193 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 19:10:22.105211 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 19:10:22.105640 kernel: rtc_cmos 00:00: RTC can wake from S4
Jun 20 19:10:22.105832 kernel: rtc_cmos 00:00: registered as rtc0
Jun 20 19:10:22.106010 kernel: rtc_cmos 00:00: setting system clock to 2025-06-20T19:10:21 UTC (1750446621)
Jun 20 19:10:22.106181 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jun 20 19:10:22.106204 kernel: intel_pstate: CPU model not supported
Jun 20 19:10:22.106223 kernel: pstore: Using crash dump compression: deflate
Jun 20 19:10:22.106241 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 19:10:22.106260 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:10:22.106283 kernel: Segment Routing with IPv6
Jun 20 19:10:22.106302 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:10:22.106355 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:10:22.106374 kernel: Key type dns_resolver registered
Jun 20 19:10:22.106392 kernel: IPI shorthand broadcast: enabled
Jun 20 19:10:22.106410 kernel: sched_clock: Marking stable (875042109, 144178066)->(1045889162, -26668987)
Jun 20 19:10:22.106429 kernel: registered taskstats version 1
Jun 20 19:10:22.106448 kernel: Loading compiled-in X.509 certificates
Jun 20 19:10:22.106466 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497'
Jun 20 19:10:22.106484 kernel: Key type .fscrypt registered
Jun 20 19:10:22.106507 kernel: Key type fscrypt-provisioning registered
Jun 20 19:10:22.106526 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:10:22.106544 kernel: ima: No architecture policies found
Jun 20 19:10:22.106562 kernel: clk: Disabling unused clocks
Jun 20 19:10:22.106581 kernel: Freeing unused kernel image (initmem) memory: 43488K
Jun 20 19:10:22.106599 kernel: Write protecting the kernel read-only data: 38912k
Jun 20 19:10:22.106618 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jun 20 19:10:22.106636 kernel: Run /init as init process
Jun 20 19:10:22.106658 kernel: with arguments:
Jun 20 19:10:22.106676 kernel: /init
Jun 20 19:10:22.106693 kernel: with environment:
Jun 20 19:10:22.106711 kernel: HOME=/
Jun 20 19:10:22.106729 kernel: TERM=linux
Jun 20 19:10:22.106747 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:10:22.106766 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 20 19:10:22.106786 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:10:22.106813 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:10:22.106834 systemd[1]: Detected virtualization google.
Jun 20 19:10:22.106852 systemd[1]: Detected architecture x86-64.
Jun 20 19:10:22.106870 systemd[1]: Running in initrd.
Jun 20 19:10:22.106889 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:10:22.106909 systemd[1]: Hostname set to .
Jun 20 19:10:22.106928 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:10:22.106947 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:10:22.106970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:10:22.106989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:10:22.107010 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:10:22.107030 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:10:22.107049 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:10:22.107070 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:10:22.107091 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:10:22.107116 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:10:22.107158 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:10:22.107182 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:10:22.107202 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:10:22.107222 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:10:22.107242 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:10:22.107266 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:10:22.107286 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:10:22.107312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:10:22.107356 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:10:22.107377 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:10:22.107397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:10:22.107418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:10:22.107438 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:10:22.107462 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:10:22.107483 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:10:22.107503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:10:22.107522 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:10:22.107542 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:10:22.107561 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:10:22.109096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:10:22.109167 systemd-journald[184]: Collecting audit messages is disabled.
Jun 20 19:10:22.109221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:10:22.109243 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:10:22.109265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:10:22.109291 systemd-journald[184]: Journal started
Jun 20 19:10:22.109365 systemd-journald[184]: Runtime Journal (/run/log/journal/9812e73e746945f98dea962edb119154) is 8M, max 148.6M, 140.6M free.
Jun 20 19:10:22.117202 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:10:22.115119 systemd-modules-load[186]: Inserted module 'overlay'
Jun 20 19:10:22.117002 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:10:22.130585 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:10:22.141567 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:10:22.142844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:22.156011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:10:22.166443 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:10:22.168538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:10:22.175125 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jun 20 19:10:22.175358 kernel: Bridge firewalling registered
Jun 20 19:10:22.175709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:10:22.177529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:10:22.194564 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:10:22.195163 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:10:22.209916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:10:22.214671 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:22.219550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:10:22.233705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:22.240546 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:10:22.273727 dracut-cmdline[220]: dracut-dracut-053
Jun 20 19:10:22.278527 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 19:10:22.283476 systemd-resolved[218]: Positive Trust Anchors:
Jun 20 19:10:22.283488 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:10:22.283564 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:10:22.288253 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jun 20 19:10:22.290040 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:10:22.294232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:10:22.382366 kernel: SCSI subsystem initialized
Jun 20 19:10:22.394372 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:10:22.406364 kernel: iscsi: registered transport (tcp)
Jun 20 19:10:22.431534 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:10:22.431627 kernel: QLogic iSCSI HBA Driver
Jun 20 19:10:22.483462 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:10:22.494591 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:10:22.522848 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
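[Editor's illustration, not part of the log: the dracut-cmdline entry above echoes back the full kernel command line it will act on, and later stages (Ignition, verity setup) read flags like flatcar.first_boot and verity.usrhash from it. A minimal sketch of parsing such a cmdline into key/value pairs, the way these consumers treat it:]

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into a {key: value} dict.

    Bare flags (no '=') map to the empty string; a repeated key keeps
    the last value, which is how most consumers resolve duplicates
    (note rootflags=rw appears twice in the log's cmdline).
    """
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")  # split at the FIRST '=' only
        params[key] = value if sep else ""
    return params

# Excerpt of the command line from the log above:
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce")
params = parse_cmdline(cmdline)
print(params["flatcar.oem.id"])  # gce
print(params["root"])            # LABEL=ROOT (the '=' inside the value survives)
```

Splitting at the first `=` only is what keeps values like `root=LABEL=ROOT` intact.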
Jun 20 19:10:22.522931 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:10:22.522959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 20 19:10:22.568382 kernel: raid6: avx2x4 gen() 18187 MB/s
Jun 20 19:10:22.585362 kernel: raid6: avx2x2 gen() 17931 MB/s
Jun 20 19:10:22.602742 kernel: raid6: avx2x1 gen() 13907 MB/s
Jun 20 19:10:22.602815 kernel: raid6: using algorithm avx2x4 gen() 18187 MB/s
Jun 20 19:10:22.620840 kernel: raid6: .... xor() 6794 MB/s, rmw enabled
Jun 20 19:10:22.620897 kernel: raid6: using avx2x2 recovery algorithm
Jun 20 19:10:22.644372 kernel: xor: automatically using best checksumming function avx
Jun 20 19:10:22.810367 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:10:22.823757 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:10:22.828543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:10:22.871208 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jun 20 19:10:22.879528 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:10:22.909524 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:10:22.930552 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jun 20 19:10:22.963000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:10:22.968544 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:10:23.072967 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:10:23.112920 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:10:23.156083 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:10:23.169352 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:10:23.178891 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:10:23.198337 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 20 19:10:23.210676 kernel: AES CTR mode by8 optimization enabled
Jun 20 19:10:23.210459 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:10:23.223474 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:10:23.263963 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:10:23.323089 kernel: scsi host0: Virtio SCSI HBA
Jun 20 19:10:23.333356 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jun 20 19:10:23.351012 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:10:23.351101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:23.369696 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:10:23.474639 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jun 20 19:10:23.474985 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jun 20 19:10:23.475261 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jun 20 19:10:23.475475 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jun 20 19:10:23.475633 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 19:10:23.475865 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:10:23.475994 kernel: GPT:17805311 != 25165823
Jun 20 19:10:23.476021 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:10:23.476040 kernel: GPT:17805311 != 25165823
Jun 20 19:10:23.476053 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:10:23.476068 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:10:23.476082 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jun 20 19:10:23.380419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:10:23.380529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:23.457914 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:10:23.495302 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:10:23.554477 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454)
Jun 20 19:10:23.554522 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (453)
Jun 20 19:10:23.545596 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:10:23.574293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:23.615287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jun 20 19:10:23.653590 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jun 20 19:10:23.664484 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jun 20 19:10:23.700848 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jun 20 19:10:23.713450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jun 20 19:10:23.737527 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:10:23.755394 disk-uuid[542]: Primary Header is updated.
Jun 20 19:10:23.755394 disk-uuid[542]: Secondary Entries is updated.
Jun 20 19:10:23.755394 disk-uuid[542]: Secondary Header is updated.
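[Editor's note, not part of the log: the "GPT:17805311 != 25165823" warnings are the usual signature of a disk that was grown after imaging, so the backup GPT header still sits at the old last sector; disk-uuid.service rewrites the headers, which is why the kernel re-reads the partition table afterwards. Illustrative arithmetic only, using the numbers from this log:]

```python
SECTOR = 512
reported_blocks = 25165824    # "sd 0:0:1:0: [sda] 25165824 512-byte logical blocks"
backup_header_lba = 17805311  # where the backup GPT header was actually found

# GPT requires the backup header at the disk's last LBA.
expected_backup_lba = reported_blocks - 1

disk_bytes = reported_blocks * SECTOR
print(disk_bytes / 2**30)    # 12.0 -> matches the kernel's "(12.9 GB/12.0 GiB)"
print(expected_backup_lba)   # 25165823, hence "17805311 != 25165823"

# The stale header marks where the pre-resize disk ended:
old_disk_bytes = (backup_header_lba + 1) * SECTOR
print(round(old_disk_bytes / 2**30, 2))  # ~8.49 GiB before the resize
```

So the mismatch is benign here: the image was built for a smaller disk and the initrd relocates the backup header on first boot.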
Jun 20 19:10:23.802529 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:10:23.774807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:10:23.836904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:24.818064 disk-uuid[543]: The operation has completed successfully.
Jun 20 19:10:24.826526 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:10:24.913661 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:10:24.913830 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:10:24.976598 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:10:25.007603 sh[566]: Success
Jun 20 19:10:25.032405 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 20 19:10:25.128292 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:10:25.136260 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:10:25.156488 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:10:25.207841 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91
Jun 20 19:10:25.207962 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:25.207989 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 20 19:10:25.217298 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 20 19:10:25.224148 kernel: BTRFS info (device dm-0): using free space tree
Jun 20 19:10:25.259382 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jun 20 19:10:25.267471 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:10:25.268718 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:10:25.273600 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:10:25.338200 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:25.338343 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:25.338369 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:10:25.343780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:10:25.389551 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:10:25.389607 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:10:25.389633 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:25.388432 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:10:25.413666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:10:25.543634 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:10:25.572574 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:10:25.613770 ignition[646]: Ignition 2.20.0
Jun 20 19:10:25.613785 ignition[646]: Stage: fetch-offline
Jun 20 19:10:25.616617 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:10:25.613838 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:25.639491 systemd-networkd[746]: lo: Link UP
Jun 20 19:10:25.613855 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:25.639497 systemd-networkd[746]: lo: Gained carrier
Jun 20 19:10:25.614002 ignition[646]: parsed url from cmdline: ""
Jun 20 19:10:25.641509 systemd-networkd[746]: Enumeration completed
Jun 20 19:10:25.614009 ignition[646]: no config URL provided
Jun 20 19:10:25.642049 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:10:25.614019 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:10:25.642057 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:10:25.614032 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:10:25.643384 systemd-networkd[746]: eth0: Link UP
Jun 20 19:10:25.614044 ignition[646]: failed to fetch config: resource requires networking
Jun 20 19:10:25.643391 systemd-networkd[746]: eth0: Gained carrier
Jun 20 19:10:25.614999 ignition[646]: Ignition finished successfully
Jun 20 19:10:25.643403 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:10:25.774637 ignition[757]: Ignition 2.20.0
Jun 20 19:10:25.657443 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jun 20 19:10:25.774649 ignition[757]: Stage: fetch
Jun 20 19:10:25.667679 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:10:25.774913 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:25.694136 systemd[1]: Reached target network.target - Network.
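[Editor's note, not part of the log: eth0 is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network and configured via DHCP, which is where the 10.128.0.9/32 lease from the GCE metadata server at 169.254.169.254 comes from. The unit is roughly of this shape; a sketch for orientation, not the verbatim shipped file:]

```ini
# zz-default.network (sketch): sorts last, so it only applies to
# interfaces no earlier .network file claimed -- hence networkd's note
# about matching "based on potentially unpredictable interface name".
[Match]
Name=*

[Network]
DHCP=yes
```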
Jun 20 19:10:25.774930 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:25.715751 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 19:10:25.775079 ignition[757]: parsed url from cmdline: ""
Jun 20 19:10:25.787312 unknown[757]: fetched base config from "system"
Jun 20 19:10:25.775087 ignition[757]: no config URL provided
Jun 20 19:10:25.787494 unknown[757]: fetched base config from "system"
Jun 20 19:10:25.775096 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:10:25.787502 unknown[757]: fetched user config from "gcp"
Jun 20 19:10:25.775111 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:10:25.789713 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 19:10:25.775145 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jun 20 19:10:25.811567 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:10:25.780089 ignition[757]: GET result: OK
Jun 20 19:10:25.848474 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:10:25.780158 ignition[757]: parsing config with SHA512: 92c21c6b1414ab84d4a2ecac8a44d59899b3d6d38ce507412fcca20499349835e9d0c76452a484a106c6a2a343cde6ce475a617e66fd07bd1cb950337fe73c6e
Jun 20 19:10:25.864681 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:10:25.787882 ignition[757]: fetch: fetch complete
Jun 20 19:10:25.897905 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:10:25.787889 ignition[757]: fetch: fetch passed
Jun 20 19:10:25.908646 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:10:25.787943 ignition[757]: Ignition finished successfully
Jun 20 19:10:25.927534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
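[Editor's illustration, not part of the log: before applying the user-data it fetched from the metadata endpoint, Ignition logs the config's SHA512 digest (the "parsing config with SHA512: 92c21c…" entry), which lets a boot be tied to one exact config payload. A minimal sketch of the same fingerprinting; the payload here is made up:]

```python
import hashlib

def config_fingerprint(payload: bytes) -> str:
    """Return a SHA512 hex digest identifying a config payload,
    in the style of Ignition's "parsing config with SHA512:" log line."""
    return hashlib.sha512(payload).hexdigest()

payload = b'{"ignition": {"version": "3.4.0"}}'  # hypothetical user-data
digest = config_fingerprint(payload)
print(len(digest))  # 128 hex characters, like the digest in the log
```

Any single-byte change to the payload yields a completely different digest, so the logged value is enough to audit which config a machine booted with.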
Jun 20 19:10:25.845978 ignition[763]: Ignition 2.20.0
Jun 20 19:10:25.947517 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:10:25.845986 ignition[763]: Stage: kargs
Jun 20 19:10:25.964469 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:10:25.846252 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:25.979490 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:10:25.846266 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:26.001561 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:10:25.847201 ignition[763]: kargs: kargs passed
Jun 20 19:10:25.847254 ignition[763]: Ignition finished successfully
Jun 20 19:10:25.894954 ignition[768]: Ignition 2.20.0
Jun 20 19:10:25.894963 ignition[768]: Stage: disks
Jun 20 19:10:25.895680 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:25.895695 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:25.896837 ignition[768]: disks: disks passed
Jun 20 19:10:25.896891 ignition[768]: Ignition finished successfully
Jun 20 19:10:26.054387 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jun 20 19:10:26.220473 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:10:26.253554 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:10:26.371392 kernel: EXT4-fs (sda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none.
Jun 20 19:10:26.372508 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:10:26.373381 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:10:26.404471 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:10:26.413453 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:10:26.439073 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:10:26.470302 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (785)
Jun 20 19:10:26.470368 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:26.470394 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:26.439374 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:10:26.514526 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:10:26.514581 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:10:26.514605 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:10:26.439426 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:10:26.527859 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:10:26.543755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:10:26.566562 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:10:26.690982 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:10:26.701493 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:10:26.711479 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:10:26.721513 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:10:26.861107 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:10:26.866489 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:10:26.885527 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
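[Editor's note, not part of the log: the "cut: /sysroot/etc/passwd: No such file or directory" messages appear to come from initrd-setup-root inspecting account databases on a root filesystem that does not carry them yet; the errors are harmless and the service still finishes. The inspection amounts to field extraction of the kind below; the data here is made up:]

```shell
#!/bin/sh
# Extract login names (field 1 of ':'-separated passwd records),
# the kind of probe the initrd-setup-root log lines attempted against
# /sysroot/etc/passwd before that file existed.
tmp=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/sh\ncore:x:500:500::/home/core:/bin/bash\n' > "$tmp"
cut -d: -f1 "$tmp"   # prints: root, then core
rm -f "$tmp"
```

On a file that does not exist, `cut` prints exactly the "No such file or directory" diagnostic seen in the log and exits non-zero.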
Jun 20 19:10:26.918361 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:26.923753 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:10:26.959366 ignition[897]: INFO : Ignition 2.20.0
Jun 20 19:10:26.959366 ignition[897]: INFO : Stage: mount
Jun 20 19:10:26.959366 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:26.959366 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:27.015482 ignition[897]: INFO : mount: mount passed
Jun 20 19:10:27.015482 ignition[897]: INFO : Ignition finished successfully
Jun 20 19:10:26.962622 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:10:26.985666 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:10:27.010464 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:10:27.057864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:10:27.106353 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (909)
Jun 20 19:10:27.123795 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:27.123876 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:27.123901 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:10:27.145448 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:10:27.145525 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:10:27.149083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:10:27.189591 ignition[926]: INFO : Ignition 2.20.0
Jun 20 19:10:27.189591 ignition[926]: INFO : Stage: files
Jun 20 19:10:27.205475 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:27.205475 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:27.205475 ignition[926]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:10:27.205475 ignition[926]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:10:27.205475 ignition[926]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:10:27.205475 ignition[926]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:10:27.205475 ignition[926]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:10:27.205475 ignition[926]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:10:27.203173 unknown[926]: wrote ssh authorized keys file for user: core
Jun 20 19:10:27.306437 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jun 20 19:10:27.306437 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jun 20 19:10:27.364150 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:10:27.629557 systemd-networkd[746]: eth0: Gained IPv6LL
Jun 20 19:10:30.458784 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jun 20 19:10:30.476533 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:10:30.476533 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 19:10:30.801754 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 19:10:30.943352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:10:30.943352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:10:30.943352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:10:30.943352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:10:30.943352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:10:30.943352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:10:31.036507 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jun 20 19:10:31.387093 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 19:10:31.709015 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 19:10:31.709015 ignition[926]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:10:31.748496 ignition[926]: INFO : files: files passed
Jun 20 19:10:31.748496 ignition[926]: INFO : Ignition finished successfully
Jun 20 19:10:31.713167 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:10:31.744564 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:10:31.781575 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:10:31.793067 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:10:31.963521 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:10:31.963521 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:10:31.793189 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:10:32.001542 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:10:31.867011 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:10:31.883886 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:10:31.919550 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:10:31.993252 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:10:31.993424 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:10:32.012790 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:10:32.036620 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:10:32.057671 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:10:32.062546 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:10:32.131382 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:10:32.160519 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:10:32.196019 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:10:32.208644 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:10:32.229733 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:10:32.247721 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:10:32.247929 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:10:32.274690 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:10:32.295710 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:10:32.313617 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:10:32.331759 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:10:32.352650 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:10:32.374751 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:10:32.394650 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:10:32.415706 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:10:32.436703 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:10:32.456738 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:10:32.474577 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:10:32.474799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:10:32.499689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:10:32.519755 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:10:32.540608 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:10:32.540781 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:10:32.562602 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:10:32.562824 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:10:32.592713 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:10:32.592945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:10:32.612791 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:10:32.612989 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:10:32.670578 ignition[979]: INFO : Ignition 2.20.0
Jun 20 19:10:32.670578 ignition[979]: INFO : Stage: umount
Jun 20 19:10:32.670578 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:32.670578 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:32.670578 ignition[979]: INFO : umount: umount passed
Jun 20 19:10:32.670578 ignition[979]: INFO : Ignition finished successfully
Jun 20 19:10:32.639601 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:10:32.678462 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:10:32.678723 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:10:32.702561 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:10:32.734461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:10:32.734740 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:10:32.752803 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:10:32.753001 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:10:32.785374 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:10:32.786716 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:10:32.786833 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:10:32.802128 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:10:32.802243 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:10:32.826774 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:10:32.826935 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:10:32.844700 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:10:32.844766 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:10:32.862684 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:10:32.862757 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:10:32.889669 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:10:32.889747 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:10:32.899723 systemd[1]: Stopped target network.target - Network.
Jun 20 19:10:32.922556 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:10:32.922668 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:10:32.930679 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:10:32.948618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:10:32.952399 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:10:32.963657 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:10:32.981638 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:10:32.996709 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:10:32.996771 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:10:33.014768 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:10:33.014824 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:10:33.041672 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:10:33.041764 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:10:33.059694 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:10:33.059774 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:10:33.067706 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:10:33.067781 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:10:33.084934 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:10:33.101863 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:10:33.129049 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:10:33.129183 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:10:33.139916 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:10:33.140190 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:10:33.140345 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:10:33.166115 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:10:33.167457 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:10:33.167542 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:10:33.179509 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:10:33.191667 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:10:33.191756 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:10:33.237650 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:10:33.237738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:33.255720 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:10:33.255802 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:10:33.273539 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:10:33.273643 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:10:33.701468 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:10:33.292750 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:10:33.312804 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:10:33.312909 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:10:33.313477 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:10:33.313637 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:10:33.339614 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:10:33.339689 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:10:33.359580 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:10:33.359649 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:10:33.379534 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:10:33.379645 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:10:33.409476 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:10:33.409726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:10:33.439500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:10:33.439634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:33.475537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:10:33.479677 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:10:33.479758 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:10:33.527867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:10:33.527964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:33.539912 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:10:33.540109 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:10:33.540645 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:10:33.540760 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:10:33.566911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:10:33.567030 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:10:33.588887 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:10:33.613502 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:10:33.643783 systemd[1]: Switching root.
Jun 20 19:10:33.997445 systemd-journald[184]: Journal stopped
Jun 20 19:10:36.662188 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:10:36.662247 kernel: SELinux: policy capability open_perms=1
Jun 20 19:10:36.662268 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:10:36.662285 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:10:36.662313 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:10:36.662348 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:10:36.662369 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:10:36.662388 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:10:36.662411 kernel: audit: type=1403 audit(1750446634.412:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:10:36.662433 systemd[1]: Successfully loaded SELinux policy in 93.896ms.
Jun 20 19:10:36.662456 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.661ms.
Jun 20 19:10:36.662478 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:10:36.662498 systemd[1]: Detected virtualization google.
Jun 20 19:10:36.662517 systemd[1]: Detected architecture x86-64.
Jun 20 19:10:36.662543 systemd[1]: Detected first boot.
Jun 20 19:10:36.662566 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:10:36.662587 zram_generator::config[1023]: No configuration found.
Jun 20 19:10:36.662610 kernel: Guest personality initialized and is inactive
Jun 20 19:10:36.662629 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:10:36.662653 kernel: Initialized host personality
Jun 20 19:10:36.662672 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:10:36.662692 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:10:36.662714 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:10:36.662736 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:10:36.662757 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:10:36.662778 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:10:36.662800 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:10:36.662823 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:10:36.662849 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:10:36.662872 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:10:36.662895 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:10:36.662917 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:10:36.662941 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:10:36.662963 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:10:36.662985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:10:36.663011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:10:36.663034 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:10:36.663056 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:10:36.663078 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:10:36.663101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:10:36.663128 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:10:36.663151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:10:36.663174 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:10:36.663200 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:10:36.663223 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:10:36.663245 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:10:36.663269 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:10:36.663291 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:10:36.663342 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:10:36.663367 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:10:36.663389 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:10:36.663418 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:10:36.663441 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:10:36.663466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:10:36.663490 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:10:36.663518 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:10:36.663542 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:10:36.663565 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:10:36.663589 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:10:36.663612 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:10:36.663636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:10:36.663659 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:10:36.663683 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:10:36.663710 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:10:36.663733 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:10:36.663756 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:10:36.663779 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:10:36.663802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:10:36.663824 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:10:36.663847 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:10:36.663869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:10:36.663891 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:10:36.663917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:10:36.663940 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:10:36.663963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:10:36.663984 kernel: fuse: init (API version 7.39)
Jun 20 19:10:36.664007 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:10:36.664029 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:10:36.664051 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:10:36.664077 kernel: ACPI: bus type drm_connector registered
Jun 20 19:10:36.664099 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:10:36.664122 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:10:36.664143 kernel: loop: module loaded
Jun 20 19:10:36.664166 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:10:36.664189 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:10:36.664212 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:10:36.664269 systemd-journald[1111]: Collecting audit messages is disabled.
Jun 20 19:10:36.664349 systemd-journald[1111]: Journal started
Jun 20 19:10:36.664393 systemd-journald[1111]: Runtime Journal (/run/log/journal/c733905a7eb04d409ce87ceaa3f74791) is 8M, max 148.6M, 140.6M free.
Jun 20 19:10:35.421504 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:10:35.434366 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 20 19:10:35.434995 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:10:36.676402 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:10:36.702372 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:10:36.740366 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:10:36.767342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:10:36.792128 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:10:36.792221 systemd[1]: Stopped verity-setup.service.
Jun 20 19:10:36.816485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:10:36.829374 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:10:36.840008 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:10:36.849687 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:10:36.859660 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:10:36.869683 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:10:36.879656 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:10:36.890623 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:10:36.900813 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:10:36.912967 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:10:36.924805 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:10:36.925087 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:10:36.936813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:10:36.937113 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:10:36.948824 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:10:36.949082 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:10:36.958775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:10:36.959049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:10:36.970822 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:10:36.971101 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:10:36.980815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:10:36.981075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:10:36.990844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:10:37.000837 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:10:37.012840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:10:37.024914 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:10:37.036917 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:10:37.061652 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:10:37.083468 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:10:37.094805 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:10:37.104485 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:10:37.104573 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:10:37.115717 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:10:37.134563 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:10:37.146788 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:10:37.156629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:10:37.163827 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:10:37.179043 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:10:37.190569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:10:37.197512 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:10:37.211066 systemd-journald[1111]: Time spent on flushing to /var/log/journal/c733905a7eb04d409ce87ceaa3f74791 is 114.009ms for 943 entries.
Jun 20 19:10:37.211066 systemd-journald[1111]: System Journal (/var/log/journal/c733905a7eb04d409ce87ceaa3f74791) is 8M, max 584.8M, 576.8M free.
Jun 20 19:10:37.357581 systemd-journald[1111]: Received client request to flush runtime journal.
Jun 20 19:10:37.357674 kernel: loop0: detected capacity change from 0 to 52152
Jun 20 19:10:37.207491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:10:37.223601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:10:37.252507 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:10:37.267580 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:10:37.282317 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 20 19:10:37.301742 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:10:37.315288 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:10:37.326851 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:10:37.346704 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:10:37.358057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:37.368317 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:10:37.399220 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:10:37.418379 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:10:37.431113 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:10:37.443967 udevadm[1149]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jun 20 19:10:37.456738 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:10:37.471521 kernel: loop1: detected capacity change from 0 to 147912
Jun 20 19:10:37.484578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:10:37.497091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:10:37.499721 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:10:37.557891 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jun 20 19:10:37.558629 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jun 20 19:10:37.571142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:10:37.582372 kernel: loop2: detected capacity change from 0 to 229808
Jun 20 19:10:37.727365 kernel: loop3: detected capacity change from 0 to 138176
Jun 20 19:10:37.841127 kernel: loop4: detected capacity change from 0 to 52152
Jun 20 19:10:37.883308 kernel: loop5: detected capacity change from 0 to 147912
Jun 20 19:10:37.940365 kernel: loop6: detected capacity change from 0 to 229808
Jun 20 19:10:37.989356 kernel: loop7: detected capacity change from 0 to 138176
Jun 20 19:10:38.056964 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jun 20 19:10:38.058016 (sd-merge)[1172]: Merged extensions into '/usr'.
Jun 20 19:10:38.067195 systemd[1]: Reload requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:10:38.067217 systemd[1]: Reloading...
Jun 20 19:10:38.220561 zram_generator::config[1196]: No configuration found.
Jun 20 19:10:38.435356 ldconfig[1142]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:10:38.491093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:10:38.627763 systemd[1]: Reloading finished in 559 ms.
Jun 20 19:10:38.649016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:10:38.659990 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:10:38.671976 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:10:38.693872 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:10:38.702272 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:10:38.721778 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:10:38.743790 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:10:38.744927 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:10:38.746944 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:10:38.747698 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jun 20 19:10:38.747957 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jun 20 19:10:38.755565 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:10:38.755693 systemd-tmpfiles[1242]: Skipping /boot
Jun 20 19:10:38.758405 systemd[1]: Reload requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:10:38.758437 systemd[1]: Reloading...
Jun 20 19:10:38.778433 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:10:38.778622 systemd-tmpfiles[1242]: Skipping /boot
Jun 20 19:10:38.839717 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Jun 20 19:10:38.889348 zram_generator::config[1271]: No configuration found.
Jun 20 19:10:39.189359 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1286)
Jun 20 19:10:39.233972 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jun 20 19:10:39.245357 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:10:39.262357 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 20 19:10:39.276348 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jun 20 19:10:39.285355 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jun 20 19:10:39.290350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:10:39.300392 kernel: ACPI: button: Sleep Button [SLPF]
Jun 20 19:10:39.467455 kernel: EDAC MC: Ver: 3.0.0
Jun 20 19:10:39.474379 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:10:39.535724 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:10:39.536041 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jun 20 19:10:39.547808 systemd[1]: Reloading finished in 788 ms.
Jun 20 19:10:39.562953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:10:39.589545 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:10:39.621390 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 20 19:10:39.642636 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:10:39.684630 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Jun 20 19:10:39.694501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:10:39.706568 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:10:39.724659 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:10:39.738735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:10:39.748608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 20 19:10:39.751016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:10:39.779245 augenrules[1370]: No rules
Jun 20 19:10:39.786570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:10:39.793176 lvm[1361]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 20 19:10:39.802608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:10:39.823156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:10:39.840515 systemd[1]: Starting setup-oem.service - Setup OEM...
Jun 20 19:10:39.849599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:10:39.855870 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:10:39.867460 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:10:39.875063 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:10:39.895335 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:10:39.912585 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:10:39.924484 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:10:39.942563 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:10:39.969594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:10:39.979500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:10:39.984724 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:10:39.985076 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:10:39.994960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:10:39.995567 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 19:10:39.995969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:10:39.996256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:10:39.996728 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:10:39.996996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:10:39.997468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:10:39.997739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:10:39.998219 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:10:39.998527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:10:40.004171 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:10:40.004723 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:10:40.016444 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jun 20 19:10:40.023511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:10:40.028695 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 19:10:40.030697 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jun 20 19:10:40.030799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:10:40.032405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:10:40.035656 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:10:40.041678 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:10:40.041759 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:10:40.042657 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:10:40.060071 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:10:40.079437 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:10:40.124957 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 19:10:40.132883 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jun 20 19:10:40.152514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:10:40.162773 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jun 20 19:10:40.269343 systemd-networkd[1381]: lo: Link UP Jun 20 19:10:40.269361 systemd-networkd[1381]: lo: Gained carrier Jun 20 19:10:40.271495 systemd-networkd[1381]: Enumeration completed Jun 20 19:10:40.271640 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:10:40.272352 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:10:40.272367 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:10:40.272987 systemd-networkd[1381]: eth0: Link UP Jun 20 19:10:40.272995 systemd-networkd[1381]: eth0: Gained carrier Jun 20 19:10:40.273021 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:10:40.279415 systemd-networkd[1381]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jun 20 19:10:40.284749 systemd-resolved[1382]: Positive Trust Anchors: Jun 20 19:10:40.284769 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:10:40.284843 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:10:40.288596 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jun 20 19:10:40.291897 systemd-resolved[1382]: Defaulting to hostname 'linux'. Jun 20 19:10:40.309207 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:10:40.320615 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:10:40.330716 systemd[1]: Reached target network.target - Network. Jun 20 19:10:40.339496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:10:40.350503 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:10:40.361680 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:10:40.373507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:10:40.384741 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:10:40.394650 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:10:40.406506 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:10:40.417478 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:10:40.417549 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:10:40.426496 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:10:40.437276 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:10:40.449257 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:10:40.462205 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:10:40.473709 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jun 20 19:10:40.484485 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:10:40.511400 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:10:40.522170 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:10:40.534740 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:10:40.545738 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:10:40.556410 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:10:40.566454 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:10:40.574577 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:10:40.574631 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:10:40.586466 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:10:40.601559 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:10:40.624544 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:10:40.646533 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:10:40.653123 jq[1435]: false Jun 20 19:10:40.662596 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:10:40.674461 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:10:40.683273 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:10:40.703305 systemd[1]: Started ntpd.service - Network Time Service. Jun 20 19:10:40.721768 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jun 20 19:10:40.725750 extend-filesystems[1436]: Found loop4 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found loop5 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found loop6 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found loop7 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda1 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda2 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda3 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found usr Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda4 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda6 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda7 Jun 20 19:10:40.725750 extend-filesystems[1436]: Found sda9 Jun 20 19:10:40.725750 extend-filesystems[1436]: Checking size of /dev/sda9 Jun 20 19:10:40.796237 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jun 20 19:10:40.727238 dbus-daemon[1434]: [system] SELinux support is enabled Jun 20 19:10:40.746556 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:10:40.798749 extend-filesystems[1436]: Resized partition /dev/sda9 Jun 20 19:10:40.879514 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jun 20 19:10:40.879599 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1276) Jun 20 19:10:40.747068 dbus-daemon[1434]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1381 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 20 19:10:40.796666 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jun 20 19:10:40.879936 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Jun 20 19:10:40.879936 extend-filesystems[1454]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 20 19:10:40.879936 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 2 Jun 20 19:10:40.879936 extend-filesystems[1454]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.843 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.846 INFO Fetch successful Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.846 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.846 INFO Fetch successful Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.846 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.847 INFO Fetch successful Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.847 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jun 20 19:10:40.937547 coreos-metadata[1433]: Jun 20 19:10:40.847 INFO Fetch successful Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:02 UTC 2025 (1): Starting Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: ---------------------------------------------------- Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: ntp-4 is maintained by Network Time Foundation, Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: corporation. Support and training for ntp-4 are Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: available at https://www.nwtime.org/support Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: ---------------------------------------------------- Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: proto: precision = 0.109 usec (-23) Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: basedate set to 2025-06-08 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: gps base set to 2025-06-08 (week 2370) Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Listen normally on 3 eth0 10.128.0.9:123 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Listen normally on 4 lo [::1]:123 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: bind(21) AF_INET6 fe80::4001:aff:fe80:9%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:9%2#123 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: failed to init interface for address fe80::4001:aff:fe80:9%2 Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: Listening on routing socket on fd #21 for interface updates Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:10:40.938188 ntpd[1440]: 20 Jun 19:10:40 ntpd[1440]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized 
Jun 20 19:10:40.819057 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:10:40.944671 extend-filesystems[1436]: Resized filesystem in /dev/sda9 Jun 20 19:10:40.904546 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jun 20 19:10:40.905426 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:10:40.911626 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:10:40.954412 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 20 19:10:40.972926 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:10:40.977354 jq[1466]: true Jun 20 19:10:40.990348 update_engine[1463]: I20250620 19:10:40.987814 1463 main.cc:92] Flatcar Update Engine starting Jun 20 19:10:40.995025 update_engine[1463]: I20250620 19:10:40.994977 1463 update_check_scheduler.cc:74] Next update check in 7m56s Jun 20 19:10:41.000646 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:10:41.000979 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jun 20 19:10:41.001454 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:10:41.001775 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:10:41.013253 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:10:41.014647 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:10:41.029898 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:10:41.030212 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:10:41.046223 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button) Jun 20 19:10:41.046877 systemd-logind[1458]: Watching system buttons on /dev/input/event3 (Sleep Button) Jun 20 19:10:41.046917 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:10:41.047569 systemd-logind[1458]: New seat seat0. Jun 20 19:10:41.065619 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:10:41.096086 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:10:41.097354 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:10:41.103733 jq[1471]: true Jun 20 19:10:41.159076 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 19:10:41.201232 tar[1470]: linux-amd64/LICENSE Jun 20 19:10:41.203485 tar[1470]: linux-amd64/helm Jun 20 19:10:41.208020 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:10:41.237960 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:10:41.254635 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jun 20 19:10:41.254920 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:10:41.255149 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:10:41.261384 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:10:41.278035 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 20 19:10:41.288497 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:10:41.288766 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:10:41.308041 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:10:41.332060 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:10:41.356729 systemd[1]: Starting sshkeys.service... Jun 20 19:10:41.415262 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 19:10:41.434881 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetch failed with 404: resource not found Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetch successful Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetch failed with 404: resource not found Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetch failed with 404: resource not found Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jun 20 19:10:41.573699 coreos-metadata[1507]: Jun 20 19:10:41.573 INFO Fetch successful Jun 20 19:10:41.580553 unknown[1507]: wrote ssh authorized keys file for user: core Jun 20 19:10:41.581590 systemd-networkd[1381]: eth0: Gained IPv6LL Jun 20 19:10:41.592523 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:10:41.616125 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:10:41.634722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:41.648928 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jun 20 19:10:41.662828 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 19:10:41.667952 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jun 20 19:10:41.675063 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1503 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 19:10:41.680411 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 19:10:41.684684 update-ssh-keys[1520]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:10:41.694413 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:10:41.715520 init.sh[1524]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jun 20 19:10:41.715520 init.sh[1524]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jun 20 19:10:41.715520 init.sh[1524]: + /usr/bin/google_instance_setup Jun 20 19:10:41.717740 systemd[1]: Finished sshkeys.service. Jun 20 19:10:41.739426 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:10:41.760376 systemd[1]: Starting polkit.service - Authorization Manager... Jun 20 19:10:41.771865 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:10:41.856166 polkitd[1538]: Started polkitd version 121 Jun 20 19:10:41.879296 polkitd[1538]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 19:10:41.885187 polkitd[1538]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 19:10:41.905911 polkitd[1538]: Finished loading, compiling and executing 2 rules Jun 20 19:10:41.906755 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 19:10:41.907026 systemd[1]: Started polkit.service - Authorization Manager. 
Jun 20 19:10:41.908039 polkitd[1538]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 19:10:41.983793 systemd-hostnamed[1503]: Hostname set to (transient) Jun 20 19:10:41.987216 systemd-resolved[1382]: System hostname changed to 'ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal'. Jun 20 19:10:42.002672 containerd[1472]: time="2025-06-20T19:10:42.001270027Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.055297622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.058074042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.058116109Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.058164913Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.058413276Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.058441441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.058523044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:10:42.058723 containerd[1472]: time="2025-06-20T19:10:42.058544288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:10:42.059156 containerd[1472]: time="2025-06-20T19:10:42.058883934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:10:42.059156 containerd[1472]: time="2025-06-20T19:10:42.058913032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 19:10:42.059156 containerd[1472]: time="2025-06-20T19:10:42.058937280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:10:42.059156 containerd[1472]: time="2025-06-20T19:10:42.058955166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 19:10:42.059156 containerd[1472]: time="2025-06-20T19:10:42.059082357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:10:42.060358 containerd[1472]: time="2025-06-20T19:10:42.059429317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:10:42.060358 containerd[1472]: time="2025-06-20T19:10:42.059666758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:10:42.060358 containerd[1472]: time="2025-06-20T19:10:42.059690116Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 19:10:42.060358 containerd[1472]: time="2025-06-20T19:10:42.059802981Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 19:10:42.060358 containerd[1472]: time="2025-06-20T19:10:42.059875410Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.071094960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.071176904Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.071204910Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.071241558Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.071268225Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.071494058Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.071879277Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.072068080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.072093179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.072119377Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.072142879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.072168332Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.072201135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 19:10:42.072314 containerd[1472]: time="2025-06-20T19:10:42.072235985Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 19:10:42.073023 containerd[1472]: time="2025-06-20T19:10:42.072260123Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073665381Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073701255Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073788162Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073835136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073859922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073882300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073906243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073927323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073949522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073971617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.073993798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.074016776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.074042106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 20 19:10:42.075392 containerd[1472]: time="2025-06-20T19:10:42.074062829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074082095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074103290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074126868Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074161997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074183662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074203741Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074277952Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.074306654Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.076376591Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.076416938Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.076457604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.076482140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.076499965Z" level=info msg="NRI interface is disabled by configuration." Jun 20 19:10:42.078424 containerd[1472]: time="2025-06-20T19:10:42.076539877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 20 19:10:42.081233 containerd[1472]: time="2025-06-20T19:10:42.079504218Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 19:10:42.081233 containerd[1472]: time="2025-06-20T19:10:42.079602679Z" level=info msg="Connect containerd service" Jun 20 19:10:42.081233 containerd[1472]: time="2025-06-20T19:10:42.079666823Z" level=info msg="using legacy CRI server" Jun 20 19:10:42.081233 containerd[1472]: time="2025-06-20T19:10:42.079678624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:10:42.081233 containerd[1472]: 
time="2025-06-20T19:10:42.080513020Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.085448930Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.085624761Z" level=info msg="Start subscribing containerd event" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.085698538Z" level=info msg="Start recovering state" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.085803616Z" level=info msg="Start event monitor" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.085838471Z" level=info msg="Start snapshots syncer" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.085856969Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.085869222Z" level=info msg="Start streaming server" Jun 20 19:10:42.088153 containerd[1472]: time="2025-06-20T19:10:42.088041348Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:10:42.088733 containerd[1472]: time="2025-06-20T19:10:42.088642052Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:10:42.090446 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:10:42.093175 containerd[1472]: time="2025-06-20T19:10:42.090748780Z" level=info msg="containerd successfully booted in 0.091487s" Jun 20 19:10:42.370635 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:10:42.429294 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jun 20 19:10:42.447117 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:10:42.464152 systemd[1]: Started sshd@0-10.128.0.9:22-147.75.109.163:34200.service - OpenSSH per-connection server daemon (147.75.109.163:34200). Jun 20 19:10:42.485532 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:10:42.485859 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:10:42.506834 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:10:42.555420 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:10:42.575529 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:10:42.593813 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:10:42.603793 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:10:42.890353 tar[1470]: linux-amd64/README.md Jun 20 19:10:42.901820 instance-setup[1532]: INFO Running google_set_multiqueue. Jun 20 19:10:42.908703 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:10:42.924961 sshd[1559]: Accepted publickey for core from 147.75.109.163 port 34200 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:10:42.930818 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:42.934173 instance-setup[1532]: INFO Set channels for eth0 to 2. Jun 20 19:10:42.941130 instance-setup[1532]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jun 20 19:10:42.943810 instance-setup[1532]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jun 20 19:10:42.944011 instance-setup[1532]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jun 20 19:10:42.945939 instance-setup[1532]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jun 20 19:10:42.946604 instance-setup[1532]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
Jun 20 19:10:42.948781 instance-setup[1532]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jun 20 19:10:42.949440 instance-setup[1532]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jun 20 19:10:42.949937 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:10:42.952031 instance-setup[1532]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jun 20 19:10:42.961415 instance-setup[1532]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 20 19:10:42.969115 instance-setup[1532]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 20 19:10:42.971560 instance-setup[1532]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jun 20 19:10:42.971814 instance-setup[1532]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jun 20 19:10:42.972355 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:10:42.986857 systemd-logind[1458]: New session 1 of user core. Jun 20 19:10:43.009971 init.sh[1524]: + /usr/bin/google_metadata_script_runner --script-type startup Jun 20 19:10:43.020432 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:10:43.043873 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:10:43.086374 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:10:43.093027 systemd-logind[1458]: New session c1 of user core. Jun 20 19:10:43.248686 startup-script[1602]: INFO Starting startup scripts. Jun 20 19:10:43.261041 startup-script[1602]: INFO No startup scripts found in metadata. Jun 20 19:10:43.261353 startup-script[1602]: INFO Finished running startup scripts. 
Jun 20 19:10:43.298476 init.sh[1524]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jun 20 19:10:43.298728 init.sh[1524]: + daemon_pids=() Jun 20 19:10:43.298892 init.sh[1524]: + for d in accounts clock_skew network Jun 20 19:10:43.299254 init.sh[1524]: + daemon_pids+=($!) Jun 20 19:10:43.301976 init.sh[1612]: + /usr/bin/google_accounts_daemon Jun 20 19:10:43.302361 init.sh[1524]: + for d in accounts clock_skew network Jun 20 19:10:43.302361 init.sh[1524]: + daemon_pids+=($!) Jun 20 19:10:43.302361 init.sh[1524]: + for d in accounts clock_skew network Jun 20 19:10:43.303832 init.sh[1524]: + daemon_pids+=($!) Jun 20 19:10:43.304016 init.sh[1524]: + NOTIFY_SOCKET=/run/systemd/notify Jun 20 19:10:43.304897 init.sh[1524]: + /usr/bin/systemd-notify --ready Jun 20 19:10:43.305417 init.sh[1614]: + /usr/bin/google_network_daemon Jun 20 19:10:43.308672 init.sh[1613]: + /usr/bin/google_clock_skew_daemon Jun 20 19:10:43.332492 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jun 20 19:10:43.350352 init.sh[1524]: + wait -n 1612 1613 1614 Jun 20 19:10:43.420065 systemd[1604]: Queued start job for default target default.target. Jun 20 19:10:43.428558 systemd[1604]: Created slice app.slice - User Application Slice. Jun 20 19:10:43.428608 systemd[1604]: Reached target paths.target - Paths. Jun 20 19:10:43.428684 systemd[1604]: Reached target timers.target - Timers. Jun 20 19:10:43.439540 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:10:43.463187 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:10:43.463293 systemd[1604]: Reached target sockets.target - Sockets. Jun 20 19:10:43.464823 systemd[1604]: Reached target basic.target - Basic System. Jun 20 19:10:43.466569 systemd[1604]: Reached target default.target - Main User Target. Jun 20 19:10:43.466651 systemd[1604]: Startup finished in 358ms. Jun 20 19:10:43.467164 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jun 20 19:10:43.483584 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:10:43.746830 systemd[1]: Started sshd@1-10.128.0.9:22-147.75.109.163:34204.service - OpenSSH per-connection server daemon (147.75.109.163:34204). Jun 20 19:10:43.841474 google-networking[1614]: INFO Starting Google Networking daemon. Jun 20 19:10:43.843160 ntpd[1440]: 20 Jun 19:10:43 ntpd[1440]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:9%2]:123 Jun 20 19:10:43.842276 ntpd[1440]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:9%2]:123 Jun 20 19:10:43.876562 google-clock-skew[1613]: INFO Starting Google Clock Skew daemon. Jun 20 19:10:43.878907 groupadd[1628]: group added to /etc/group: name=google-sudoers, GID=1000 Jun 20 19:10:43.883620 google-clock-skew[1613]: INFO Clock drift token has changed: 0. Jun 20 19:10:43.884258 groupadd[1628]: group added to /etc/gshadow: name=google-sudoers Jun 20 19:10:43.935643 groupadd[1628]: new group: name=google-sudoers, GID=1000 Jun 20 19:10:43.964794 google-accounts[1612]: INFO Starting Google Accounts daemon. Jun 20 19:10:43.979114 google-accounts[1612]: WARNING OS Login not installed. Jun 20 19:10:43.980837 google-accounts[1612]: INFO Creating a new user account for 0. Jun 20 19:10:43.985544 init.sh[1639]: useradd: invalid user name '0': use --badname to ignore Jun 20 19:10:43.985339 google-accounts[1612]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jun 20 19:10:44.092652 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 34204 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:10:44.095046 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:44.103050 systemd-logind[1458]: New session 2 of user core. Jun 20 19:10:44.108542 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 20 19:10:44.000602 google-clock-skew[1613]: INFO Synced system time with hardware clock. Jun 20 19:10:44.019648 systemd-journald[1111]: Time jumped backwards, rotating. Jun 20 19:10:44.001994 systemd-resolved[1382]: Clock change detected. Flushing caches. Jun 20 19:10:44.034909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:44.047226 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:10:44.052026 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:10:44.058601 systemd[1]: Startup finished in 1.051s (kernel) + 12.629s (initrd) + 9.942s (userspace) = 23.623s. Jun 20 19:10:44.102518 sshd[1641]: Connection closed by 147.75.109.163 port 34204 Jun 20 19:10:44.103979 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:44.109335 systemd[1]: sshd@1-10.128.0.9:22-147.75.109.163:34204.service: Deactivated successfully. Jun 20 19:10:44.112136 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:10:44.114700 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:10:44.116143 systemd-logind[1458]: Removed session 2. Jun 20 19:10:44.165791 systemd[1]: Started sshd@2-10.128.0.9:22-147.75.109.163:34212.service - OpenSSH per-connection server daemon (147.75.109.163:34212). Jun 20 19:10:44.462240 sshd[1658]: Accepted publickey for core from 147.75.109.163 port 34212 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:10:44.463478 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:44.472041 systemd-logind[1458]: New session 3 of user core. Jun 20 19:10:44.477477 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jun 20 19:10:44.670842 sshd[1664]: Connection closed by 147.75.109.163 port 34212 Jun 20 19:10:44.672542 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:44.678957 systemd[1]: sshd@2-10.128.0.9:22-147.75.109.163:34212.service: Deactivated successfully. Jun 20 19:10:44.681854 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:10:44.683219 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:10:44.684857 systemd-logind[1458]: Removed session 3. Jun 20 19:10:44.729700 systemd[1]: Started sshd@3-10.128.0.9:22-147.75.109.163:34218.service - OpenSSH per-connection server daemon (147.75.109.163:34218). Jun 20 19:10:44.972001 kubelet[1648]: E0620 19:10:44.971848 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:10:44.975708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:10:44.975964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:10:44.976534 systemd[1]: kubelet.service: Consumed 1.332s CPU time, 270.3M memory peak. Jun 20 19:10:45.027101 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 34218 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:10:45.028903 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:45.036091 systemd-logind[1458]: New session 4 of user core. Jun 20 19:10:45.045530 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jun 20 19:10:45.240608 sshd[1674]: Connection closed by 147.75.109.163 port 34218 Jun 20 19:10:45.241511 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:45.246756 systemd[1]: sshd@3-10.128.0.9:22-147.75.109.163:34218.service: Deactivated successfully. Jun 20 19:10:45.249056 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:10:45.250046 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:10:45.251534 systemd-logind[1458]: Removed session 4. Jun 20 19:10:45.301685 systemd[1]: Started sshd@4-10.128.0.9:22-147.75.109.163:34234.service - OpenSSH per-connection server daemon (147.75.109.163:34234). Jun 20 19:10:45.588164 sshd[1680]: Accepted publickey for core from 147.75.109.163 port 34234 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:10:45.589909 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:45.595804 systemd-logind[1458]: New session 5 of user core. Jun 20 19:10:45.611536 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:10:45.780483 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:10:45.781032 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:45.795172 sudo[1683]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:45.837470 sshd[1682]: Connection closed by 147.75.109.163 port 34234 Jun 20 19:10:45.838606 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:45.844617 systemd[1]: sshd@4-10.128.0.9:22-147.75.109.163:34234.service: Deactivated successfully. Jun 20 19:10:45.846885 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:10:45.847895 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:10:45.849426 systemd-logind[1458]: Removed session 5. 
Jun 20 19:10:45.898692 systemd[1]: Started sshd@5-10.128.0.9:22-147.75.109.163:34698.service - OpenSSH per-connection server daemon (147.75.109.163:34698). Jun 20 19:10:46.186467 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 34698 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:10:46.188034 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:46.195863 systemd-logind[1458]: New session 6 of user core. Jun 20 19:10:46.199488 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:10:46.363666 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:10:46.364158 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:46.369112 sudo[1693]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:46.382903 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:10:46.383411 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:46.399749 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:10:46.438708 augenrules[1715]: No rules Jun 20 19:10:46.439852 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:10:46.440221 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:10:46.441674 sudo[1692]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:46.484201 sshd[1691]: Connection closed by 147.75.109.163 port 34698 Jun 20 19:10:46.485216 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:46.490752 systemd[1]: sshd@5-10.128.0.9:22-147.75.109.163:34698.service: Deactivated successfully. Jun 20 19:10:46.493059 systemd[1]: session-6.scope: Deactivated successfully. 
Jun 20 19:10:46.494205 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:10:46.495609 systemd-logind[1458]: Removed session 6. Jun 20 19:10:46.542766 systemd[1]: Started sshd@6-10.128.0.9:22-147.75.109.163:34710.service - OpenSSH per-connection server daemon (147.75.109.163:34710). Jun 20 19:10:46.832907 sshd[1724]: Accepted publickey for core from 147.75.109.163 port 34710 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:10:46.834742 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:46.841626 systemd-logind[1458]: New session 7 of user core. Jun 20 19:10:46.852612 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:10:47.012283 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:10:47.012789 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:47.484658 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:10:47.496883 (dockerd)[1744]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:10:47.943535 dockerd[1744]: time="2025-06-20T19:10:47.943371007Z" level=info msg="Starting up" Jun 20 19:10:48.211221 dockerd[1744]: time="2025-06-20T19:10:48.210884805Z" level=info msg="Loading containers: start." Jun 20 19:10:48.426292 kernel: Initializing XFRM netlink socket Jun 20 19:10:48.536425 systemd-networkd[1381]: docker0: Link UP Jun 20 19:10:48.571188 dockerd[1744]: time="2025-06-20T19:10:48.571124036Z" level=info msg="Loading containers: done." 
Jun 20 19:10:48.594321 dockerd[1744]: time="2025-06-20T19:10:48.592958669Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:10:48.594321 dockerd[1744]: time="2025-06-20T19:10:48.593092746Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 19:10:48.594321 dockerd[1744]: time="2025-06-20T19:10:48.593295215Z" level=info msg="Daemon has completed initialization" Jun 20 19:10:48.594714 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3390987953-merged.mount: Deactivated successfully. Jun 20 19:10:48.634092 dockerd[1744]: time="2025-06-20T19:10:48.633841558Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:10:48.634246 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:10:49.473734 containerd[1472]: time="2025-06-20T19:10:49.473686149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 19:10:50.054152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942851785.mount: Deactivated successfully. 
Jun 20 19:10:51.825927 containerd[1472]: time="2025-06-20T19:10:51.825854699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:51.827937 containerd[1472]: time="2025-06-20T19:10:51.827852158Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30085727" Jun 20 19:10:51.829001 containerd[1472]: time="2025-06-20T19:10:51.828940295Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:51.833597 containerd[1472]: time="2025-06-20T19:10:51.833523962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:51.839271 containerd[1472]: time="2025-06-20T19:10:51.838692489Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.364947471s" Jun 20 19:10:51.839271 containerd[1472]: time="2025-06-20T19:10:51.838743277Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 20 19:10:51.841577 containerd[1472]: time="2025-06-20T19:10:51.841523734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 19:10:53.342810 containerd[1472]: time="2025-06-20T19:10:53.342738659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:53.344356 containerd[1472]: time="2025-06-20T19:10:53.344281726Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26020880" Jun 20 19:10:53.346190 containerd[1472]: time="2025-06-20T19:10:53.345665468Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:53.349338 containerd[1472]: time="2025-06-20T19:10:53.349292916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:53.350774 containerd[1472]: time="2025-06-20T19:10:53.350734945Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.509166126s" Jun 20 19:10:53.350917 containerd[1472]: time="2025-06-20T19:10:53.350893289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jun 20 19:10:53.351660 containerd[1472]: time="2025-06-20T19:10:53.351560794Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 19:10:54.728209 containerd[1472]: time="2025-06-20T19:10:54.728142950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:54.729795 containerd[1472]: time="2025-06-20T19:10:54.729717584Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20156971" Jun 20 19:10:54.731048 containerd[1472]: time="2025-06-20T19:10:54.730978007Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:54.736230 containerd[1472]: time="2025-06-20T19:10:54.734681962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:54.736230 containerd[1472]: time="2025-06-20T19:10:54.736033688Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.384429528s" Jun 20 19:10:54.736230 containerd[1472]: time="2025-06-20T19:10:54.736070686Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jun 20 19:10:54.737055 containerd[1472]: time="2025-06-20T19:10:54.737007111Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 19:10:55.226702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:10:55.236538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:55.615845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:10:55.621592 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:10:55.713289 kubelet[2000]: E0620 19:10:55.712823 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:10:55.721582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:10:55.721848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:10:55.723252 systemd[1]: kubelet.service: Consumed 226ms CPU time, 108.4M memory peak. Jun 20 19:10:56.191090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571488927.mount: Deactivated successfully. Jun 20 19:10:56.887083 containerd[1472]: time="2025-06-20T19:10:56.887019295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:56.888303 containerd[1472]: time="2025-06-20T19:10:56.888222136Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31894641" Jun 20 19:10:56.889574 containerd[1472]: time="2025-06-20T19:10:56.889499308Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:56.892224 containerd[1472]: time="2025-06-20T19:10:56.892161430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:56.893279 containerd[1472]: time="2025-06-20T19:10:56.893031715Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.155712335s" Jun 20 19:10:56.893279 containerd[1472]: time="2025-06-20T19:10:56.893079123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jun 20 19:10:56.893694 containerd[1472]: time="2025-06-20T19:10:56.893647635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 19:10:57.351097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930478219.mount: Deactivated successfully. Jun 20 19:10:58.655161 containerd[1472]: time="2025-06-20T19:10:58.655092178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:58.656771 containerd[1472]: time="2025-06-20T19:10:58.656715061Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880" Jun 20 19:10:58.657808 containerd[1472]: time="2025-06-20T19:10:58.657738493Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:58.661899 containerd[1472]: time="2025-06-20T19:10:58.661317336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:58.662921 containerd[1472]: time="2025-06-20T19:10:58.662878506Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.769183323s" Jun 20 19:10:58.663014 containerd[1472]: time="2025-06-20T19:10:58.662926240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jun 20 19:10:58.663721 containerd[1472]: time="2025-06-20T19:10:58.663690946Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:10:59.142456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142455001.mount: Deactivated successfully. Jun 20 19:10:59.148713 containerd[1472]: time="2025-06-20T19:10:59.148647182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:59.150052 containerd[1472]: time="2025-06-20T19:10:59.149981076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jun 20 19:10:59.151226 containerd[1472]: time="2025-06-20T19:10:59.151153176Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:59.154163 containerd[1472]: time="2025-06-20T19:10:59.154095721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:59.156098 containerd[1472]: time="2025-06-20T19:10:59.155137495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 491.401257ms" Jun 20 19:10:59.156098 containerd[1472]: time="2025-06-20T19:10:59.155180706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:10:59.156403 containerd[1472]: time="2025-06-20T19:10:59.156296011Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 19:10:59.657433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248269992.mount: Deactivated successfully. Jun 20 19:11:01.869231 containerd[1472]: time="2025-06-20T19:11:01.869155213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:11:01.871021 containerd[1472]: time="2025-06-20T19:11:01.870956660Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58251906" Jun 20 19:11:01.872349 containerd[1472]: time="2025-06-20T19:11:01.872252012Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:11:01.876588 containerd[1472]: time="2025-06-20T19:11:01.876529486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:11:01.878863 containerd[1472]: time="2025-06-20T19:11:01.878121315Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.721784321s" Jun 20 
19:11:01.878863 containerd[1472]: time="2025-06-20T19:11:01.878170193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jun 20 19:11:05.972652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:11:05.985452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:07.381081 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:11:07.381232 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:11:07.381668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:07.382013 systemd[1]: kubelet.service: Consumed 108ms CPU time, 78M memory peak. Jun 20 19:11:07.390775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:07.445764 systemd[1]: Reload requested from client PID 2160 ('systemctl') (unit session-7.scope)... Jun 20 19:11:07.445797 systemd[1]: Reloading... Jun 20 19:11:07.617729 zram_generator::config[2206]: No configuration found. Jun 20 19:11:07.772611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:11:07.917956 systemd[1]: Reloading finished in 471 ms. Jun 20 19:11:07.992474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:07.993407 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:11:08.004181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:08.005223 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:11:08.005605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:11:08.005679 systemd[1]: kubelet.service: Consumed 160ms CPU time, 100M memory peak. Jun 20 19:11:08.016726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:08.285971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:08.298833 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:11:08.355919 kubelet[2264]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:11:08.355919 kubelet[2264]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:11:08.355919 kubelet[2264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:11:08.356460 kubelet[2264]: I0620 19:11:08.355972 2264 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:11:09.203511 kubelet[2264]: I0620 19:11:09.203462 2264 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:11:09.204302 kubelet[2264]: I0620 19:11:09.203783 2264 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:11:09.204601 kubelet[2264]: I0620 19:11:09.204581 2264 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:11:09.246318 kubelet[2264]: E0620 19:11:09.246233 2264 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 19:11:09.251539 kubelet[2264]: I0620 19:11:09.251318 2264 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:11:09.259993 kubelet[2264]: E0620 19:11:09.259945 2264 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:11:09.259993 kubelet[2264]: I0620 19:11:09.259985 2264 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:11:09.264799 kubelet[2264]: I0620 19:11:09.264750 2264 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:11:09.265232 kubelet[2264]: I0620 19:11:09.265173 2264 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:11:09.265490 kubelet[2264]: I0620 19:11:09.265222 2264 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:11:09.265689 kubelet[2264]: I0620 19:11:09.265493 2264 topology_manager.go:138] "Creating topology 
manager with none policy" Jun 20 19:11:09.265689 kubelet[2264]: I0620 19:11:09.265512 2264 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:11:09.265689 kubelet[2264]: I0620 19:11:09.265679 2264 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:11:09.269653 kubelet[2264]: I0620 19:11:09.269618 2264 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:11:09.269653 kubelet[2264]: I0620 19:11:09.269650 2264 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:11:09.269945 kubelet[2264]: I0620 19:11:09.269683 2264 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:11:09.269945 kubelet[2264]: I0620 19:11:09.269706 2264 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:11:09.287867 kubelet[2264]: E0620 19:11:09.287633 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 19:11:09.288121 kubelet[2264]: E0620 19:11:09.287813 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 19:11:09.288219 kubelet[2264]: I0620 19:11:09.288188 2264 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:11:09.288887 kubelet[2264]: I0620 19:11:09.288849 2264 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:11:09.290010 kubelet[2264]: W0620 19:11:09.289970 2264 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:11:09.304090 kubelet[2264]: I0620 19:11:09.304030 2264 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:11:09.304198 kubelet[2264]: I0620 19:11:09.304116 2264 server.go:1289] "Started kubelet" Jun 20 19:11:09.305998 kubelet[2264]: I0620 19:11:09.305295 2264 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:11:09.311704 kubelet[2264]: I0620 19:11:09.311632 2264 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:11:09.312156 kubelet[2264]: I0620 19:11:09.312126 2264 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:11:09.312995 kubelet[2264]: I0620 19:11:09.312971 2264 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:11:09.317895 kubelet[2264]: I0620 19:11:09.317872 2264 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:11:09.323102 kubelet[2264]: I0620 19:11:09.323071 2264 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:11:09.323868 kubelet[2264]: E0620 19:11:09.320701 2264 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal.184ad5f8b2226785 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,UID:ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,},FirstTimestamp:2025-06-20 19:11:09.304063877 +0000 UTC m=+0.999086737,LastTimestamp:2025-06-20 19:11:09.304063877 +0000 UTC m=+0.999086737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,}" Jun 20 19:11:09.325594 kubelet[2264]: I0620 19:11:09.325567 2264 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:11:09.326362 kubelet[2264]: E0620 19:11:09.325824 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" Jun 20 19:11:09.326637 kubelet[2264]: I0620 19:11:09.326607 2264 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:11:09.326738 kubelet[2264]: I0620 19:11:09.326721 2264 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:11:09.327463 kubelet[2264]: E0620 19:11:09.327422 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="200ms" Jun 20 19:11:09.327955 kubelet[2264]: E0620 19:11:09.327916 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 19:11:09.329710 kubelet[2264]: I0620 19:11:09.329679 2264 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:11:09.329801 kubelet[2264]: I0620 19:11:09.329787 2264 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:11:09.333541 kubelet[2264]: I0620 19:11:09.332189 2264 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:11:09.354044 kubelet[2264]: I0620 19:11:09.353987 2264 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 19:11:09.356049 kubelet[2264]: I0620 19:11:09.356023 2264 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 19:11:09.356600 kubelet[2264]: I0620 19:11:09.356580 2264 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 19:11:09.356729 kubelet[2264]: I0620 19:11:09.356712 2264 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:11:09.356875 kubelet[2264]: I0620 19:11:09.356862 2264 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 19:11:09.357046 kubelet[2264]: E0620 19:11:09.357016 2264 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:11:09.365361 kubelet[2264]: E0620 19:11:09.365152 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 19:11:09.365598 kubelet[2264]: E0620 19:11:09.365453 2264 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:11:09.384794 kubelet[2264]: I0620 19:11:09.384770 2264 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:11:09.384918 kubelet[2264]: I0620 19:11:09.384832 2264 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:11:09.384918 kubelet[2264]: I0620 19:11:09.384858 2264 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:11:09.388189 kubelet[2264]: I0620 19:11:09.388163 2264 policy_none.go:49] "None policy: Start" Jun 20 19:11:09.388319 kubelet[2264]: I0620 19:11:09.388200 2264 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:11:09.388319 kubelet[2264]: I0620 19:11:09.388219 2264 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:11:09.396542 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:11:09.412523 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 19:11:09.417169 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:11:09.426120 kubelet[2264]: E0620 19:11:09.425220 2264 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 19:11:09.426120 kubelet[2264]: I0620 19:11:09.425483 2264 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:11:09.426120 kubelet[2264]: I0620 19:11:09.425499 2264 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:11:09.426120 kubelet[2264]: I0620 19:11:09.425839 2264 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:11:09.427564 kubelet[2264]: E0620 19:11:09.427532 2264 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:11:09.427676 kubelet[2264]: E0620 19:11:09.427606 2264 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" Jun 20 19:11:09.499910 systemd[1]: Created slice kubepods-burstable-pod82a72703a4fe51b383501f9398ebf963.slice - libcontainer container kubepods-burstable-pod82a72703a4fe51b383501f9398ebf963.slice. Jun 20 19:11:09.510221 kubelet[2264]: E0620 19:11:09.510165 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.516672 systemd[1]: Created slice kubepods-burstable-pod9a99fb32fb9fe7012b5ca3865c463c13.slice - libcontainer container kubepods-burstable-pod9a99fb32fb9fe7012b5ca3865c463c13.slice. 
Jun 20 19:11:09.525055 kubelet[2264]: E0620 19:11:09.524788 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.528292 systemd[1]: Created slice kubepods-burstable-pode7ebf05b049163054f8514d5655bc8d6.slice - libcontainer container kubepods-burstable-pode7ebf05b049163054f8514d5655bc8d6.slice. Jun 20 19:11:09.530645 kubelet[2264]: E0620 19:11:09.530490 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="400ms" Jun 20 19:11:09.530856 kubelet[2264]: I0620 19:11:09.530785 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531115 kubelet[2264]: I0620 19:11:09.530934 2264 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531115 kubelet[2264]: I0620 19:11:09.530838 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " 
pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531115 kubelet[2264]: I0620 19:11:09.530988 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531115 kubelet[2264]: I0620 19:11:09.531064 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531604 kubelet[2264]: E0620 19:11:09.531394 2264 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531604 kubelet[2264]: I0620 19:11:09.531095 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82a72703a4fe51b383501f9398ebf963-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"82a72703a4fe51b383501f9398ebf963\") " pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531604 kubelet[2264]: I0620 19:11:09.531491 2264 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82a72703a4fe51b383501f9398ebf963-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"82a72703a4fe51b383501f9398ebf963\") " pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.531604 kubelet[2264]: I0620 19:11:09.531545 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82a72703a4fe51b383501f9398ebf963-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"82a72703a4fe51b383501f9398ebf963\") " pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.533065 kubelet[2264]: E0620 19:11:09.533023 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.632918 kubelet[2264]: I0620 19:11:09.632619 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.633303 kubelet[2264]: I0620 19:11:09.633220 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e7ebf05b049163054f8514d5655bc8d6-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"e7ebf05b049163054f8514d5655bc8d6\") " pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.738460 kubelet[2264]: I0620 19:11:09.738414 2264 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.738863 kubelet[2264]: E0620 19:11:09.738820 2264 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:09.812704 containerd[1472]: time="2025-06-20T19:11:09.812572625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,Uid:82a72703a4fe51b383501f9398ebf963,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:09.826572 containerd[1472]: time="2025-06-20T19:11:09.826523982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,Uid:9a99fb32fb9fe7012b5ca3865c463c13,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:09.834480 containerd[1472]: time="2025-06-20T19:11:09.834437266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,Uid:e7ebf05b049163054f8514d5655bc8d6,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:09.931559 kubelet[2264]: E0620 19:11:09.931459 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection 
refused" interval="800ms" Jun 20 19:11:10.111048 kubelet[2264]: E0620 19:11:10.110898 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 19:11:10.148960 kubelet[2264]: I0620 19:11:10.148905 2264 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:10.149740 kubelet[2264]: E0620 19:11:10.149635 2264 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:10.174804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346543431.mount: Deactivated successfully. 
Jun 20 19:11:10.183076 containerd[1472]: time="2025-06-20T19:11:10.182997547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:10.185524 containerd[1472]: time="2025-06-20T19:11:10.185467398Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:10.187744 containerd[1472]: time="2025-06-20T19:11:10.187674645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jun 20 19:11:10.188771 containerd[1472]: time="2025-06-20T19:11:10.188712858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:11:10.191489 containerd[1472]: time="2025-06-20T19:11:10.191432785Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:10.193776 containerd[1472]: time="2025-06-20T19:11:10.193569729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:11:10.193776 containerd[1472]: time="2025-06-20T19:11:10.193684559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:10.197052 containerd[1472]: time="2025-06-20T19:11:10.197010319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:10.199792 
containerd[1472]: time="2025-06-20T19:11:10.199133274Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 386.423721ms" Jun 20 19:11:10.201767 containerd[1472]: time="2025-06-20T19:11:10.201709358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 375.071445ms" Jun 20 19:11:10.215280 containerd[1472]: time="2025-06-20T19:11:10.215223047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 380.680273ms" Jun 20 19:11:10.423151 kubelet[2264]: E0620 19:11:10.422991 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 19:11:10.424122 containerd[1472]: time="2025-06-20T19:11:10.422126960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:10.429423 containerd[1472]: time="2025-06-20T19:11:10.429024950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:10.429423 containerd[1472]: time="2025-06-20T19:11:10.429077887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:10.429423 containerd[1472]: time="2025-06-20T19:11:10.429339698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:10.429423 containerd[1472]: time="2025-06-20T19:11:10.428814936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:10.429423 containerd[1472]: time="2025-06-20T19:11:10.428925681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:10.429423 containerd[1472]: time="2025-06-20T19:11:10.428954977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:10.430716 containerd[1472]: time="2025-06-20T19:11:10.430094203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:10.430716 containerd[1472]: time="2025-06-20T19:11:10.430159259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:10.430716 containerd[1472]: time="2025-06-20T19:11:10.430184622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:10.430716 containerd[1472]: time="2025-06-20T19:11:10.430325048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:10.433324 containerd[1472]: time="2025-06-20T19:11:10.432674514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:10.472510 kubelet[2264]: E0620 19:11:10.472445 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 19:11:10.478536 systemd[1]: Started cri-containerd-53513f55dbaeb4730cbbaf4044e2a16cda1f61a8ead8af8f04479c5564651695.scope - libcontainer container 53513f55dbaeb4730cbbaf4044e2a16cda1f61a8ead8af8f04479c5564651695. Jun 20 19:11:10.490493 systemd[1]: Started cri-containerd-33181eef2b39037bd06ec9b5128b6c8d26cf2f63e12b7d951fe8abf47884ee1d.scope - libcontainer container 33181eef2b39037bd06ec9b5128b6c8d26cf2f63e12b7d951fe8abf47884ee1d. Jun 20 19:11:10.492525 systemd[1]: Started cri-containerd-4289a50c2797125a3b38c44a8b8a46d916760d4b0c7b3eb1140a78ca22bc7455.scope - libcontainer container 4289a50c2797125a3b38c44a8b8a46d916760d4b0c7b3eb1140a78ca22bc7455. 
Jun 20 19:11:10.592227 containerd[1472]: time="2025-06-20T19:11:10.592172805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,Uid:82a72703a4fe51b383501f9398ebf963,Namespace:kube-system,Attempt:0,} returns sandbox id \"53513f55dbaeb4730cbbaf4044e2a16cda1f61a8ead8af8f04479c5564651695\"" Jun 20 19:11:10.595003 kubelet[2264]: E0620 19:11:10.594610 2264 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-21291" Jun 20 19:11:10.603345 containerd[1472]: time="2025-06-20T19:11:10.602676885Z" level=info msg="CreateContainer within sandbox \"53513f55dbaeb4730cbbaf4044e2a16cda1f61a8ead8af8f04479c5564651695\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:11:10.612315 containerd[1472]: time="2025-06-20T19:11:10.611254823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,Uid:e7ebf05b049163054f8514d5655bc8d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4289a50c2797125a3b38c44a8b8a46d916760d4b0c7b3eb1140a78ca22bc7455\"" Jun 20 19:11:10.616981 kubelet[2264]: E0620 19:11:10.616936 2264 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-21291" Jun 20 19:11:10.621255 containerd[1472]: time="2025-06-20T19:11:10.621217941Z" level=info msg="CreateContainer within sandbox \"4289a50c2797125a3b38c44a8b8a46d916760d4b0c7b3eb1140a78ca22bc7455\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:11:10.631539 containerd[1472]: time="2025-06-20T19:11:10.631479344Z" level=info 
msg="CreateContainer within sandbox \"53513f55dbaeb4730cbbaf4044e2a16cda1f61a8ead8af8f04479c5564651695\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3837dd010e407cd25b47e25afd0e23618331e092a36ac4bdce80dab6179c831c\"" Jun 20 19:11:10.632593 containerd[1472]: time="2025-06-20T19:11:10.632553792Z" level=info msg="StartContainer for \"3837dd010e407cd25b47e25afd0e23618331e092a36ac4bdce80dab6179c831c\"" Jun 20 19:11:10.639604 containerd[1472]: time="2025-06-20T19:11:10.639560341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal,Uid:9a99fb32fb9fe7012b5ca3865c463c13,Namespace:kube-system,Attempt:0,} returns sandbox id \"33181eef2b39037bd06ec9b5128b6c8d26cf2f63e12b7d951fe8abf47884ee1d\"" Jun 20 19:11:10.641769 kubelet[2264]: E0620 19:11:10.641729 2264 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flat" Jun 20 19:11:10.646663 containerd[1472]: time="2025-06-20T19:11:10.646622153Z" level=info msg="CreateContainer within sandbox \"33181eef2b39037bd06ec9b5128b6c8d26cf2f63e12b7d951fe8abf47884ee1d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:11:10.654160 containerd[1472]: time="2025-06-20T19:11:10.654098849Z" level=info msg="CreateContainer within sandbox \"4289a50c2797125a3b38c44a8b8a46d916760d4b0c7b3eb1140a78ca22bc7455\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3eb433a89a02109001a3c26d8150739a641a8480d7071e6b8b954f0e4a305d9\"" Jun 20 19:11:10.656304 containerd[1472]: time="2025-06-20T19:11:10.656185876Z" level=info msg="StartContainer for \"c3eb433a89a02109001a3c26d8150739a641a8480d7071e6b8b954f0e4a305d9\"" Jun 20 19:11:10.674666 containerd[1472]: 
time="2025-06-20T19:11:10.674553357Z" level=info msg="CreateContainer within sandbox \"33181eef2b39037bd06ec9b5128b6c8d26cf2f63e12b7d951fe8abf47884ee1d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a28cbf0078078ad46e617d64e7c5a85f7b9bee49782a58960859e15e7490387c\"" Jun 20 19:11:10.676898 containerd[1472]: time="2025-06-20T19:11:10.675501722Z" level=info msg="StartContainer for \"a28cbf0078078ad46e617d64e7c5a85f7b9bee49782a58960859e15e7490387c\"" Jun 20 19:11:10.688501 systemd[1]: Started cri-containerd-3837dd010e407cd25b47e25afd0e23618331e092a36ac4bdce80dab6179c831c.scope - libcontainer container 3837dd010e407cd25b47e25afd0e23618331e092a36ac4bdce80dab6179c831c. Jun 20 19:11:10.728484 systemd[1]: Started cri-containerd-c3eb433a89a02109001a3c26d8150739a641a8480d7071e6b8b954f0e4a305d9.scope - libcontainer container c3eb433a89a02109001a3c26d8150739a641a8480d7071e6b8b954f0e4a305d9. Jun 20 19:11:10.734371 kubelet[2264]: E0620 19:11:10.733783 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="1.6s" Jun 20 19:11:10.762896 systemd[1]: Started cri-containerd-a28cbf0078078ad46e617d64e7c5a85f7b9bee49782a58960859e15e7490387c.scope - libcontainer container a28cbf0078078ad46e617d64e7c5a85f7b9bee49782a58960859e15e7490387c. 
Jun 20 19:11:10.806446 containerd[1472]: time="2025-06-20T19:11:10.805008258Z" level=info msg="StartContainer for \"3837dd010e407cd25b47e25afd0e23618331e092a36ac4bdce80dab6179c831c\" returns successfully" Jun 20 19:11:10.869329 kubelet[2264]: E0620 19:11:10.865122 2264 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 19:11:10.892009 containerd[1472]: time="2025-06-20T19:11:10.891959259Z" level=info msg="StartContainer for \"c3eb433a89a02109001a3c26d8150739a641a8480d7071e6b8b954f0e4a305d9\" returns successfully" Jun 20 19:11:10.911515 containerd[1472]: time="2025-06-20T19:11:10.911463184Z" level=info msg="StartContainer for \"a28cbf0078078ad46e617d64e7c5a85f7b9bee49782a58960859e15e7490387c\" returns successfully" Jun 20 19:11:10.956023 kubelet[2264]: I0620 19:11:10.955663 2264 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:11.388372 kubelet[2264]: E0620 19:11:11.387544 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:11.388372 kubelet[2264]: E0620 19:11:11.387630 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:11.405138 kubelet[2264]: E0620 19:11:11.405101 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:11.809249 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 20 19:11:12.399238 kubelet[2264]: E0620 19:11:12.399201 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:12.401287 kubelet[2264]: E0620 19:11:12.400070 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:13.402439 kubelet[2264]: E0620 19:11:13.401769 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:13.493442 kubelet[2264]: E0620 19:11:13.492334 2264 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.500677 kubelet[2264]: E0620 19:11:14.500603 2264 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.636872 kubelet[2264]: I0620 19:11:14.636746 2264 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.636872 
kubelet[2264]: E0620 19:11:14.636824 2264 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\": node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" not found" Jun 20 19:11:14.727156 kubelet[2264]: I0620 19:11:14.727098 2264 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.745179 kubelet[2264]: E0620 19:11:14.745110 2264 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.745179 kubelet[2264]: I0620 19:11:14.745158 2264 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.749614 kubelet[2264]: E0620 19:11:14.749570 2264 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.749614 kubelet[2264]: I0620 19:11:14.749616 2264 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:14.754338 kubelet[2264]: E0620 19:11:14.753404 2264 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" Jun 20 19:11:15.285080 kubelet[2264]: I0620 19:11:15.283650 2264 apiserver.go:52] "Watching apiserver" Jun 20 19:11:15.326810 kubelet[2264]: I0620 19:11:15.326772 2264 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:11:16.809702 systemd[1]: Reload requested from client PID 2553 ('systemctl') (unit session-7.scope)... Jun 20 19:11:16.809725 systemd[1]: Reloading... Jun 20 19:11:16.968463 zram_generator::config[2601]: No configuration found. Jun 20 19:11:17.115413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:11:17.297667 systemd[1]: Reloading finished in 487 ms. Jun 20 19:11:17.340717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:17.371903 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:11:17.372283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:17.372371 systemd[1]: kubelet.service: Consumed 1.521s CPU time, 136.2M memory peak. Jun 20 19:11:17.382691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:17.762482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:17.764618 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:11:17.839922 kubelet[2646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:11:17.839922 kubelet[2646]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:11:17.839922 kubelet[2646]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:11:17.839922 kubelet[2646]: I0620 19:11:17.836083 2646 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:11:17.850744 kubelet[2646]: I0620 19:11:17.850418 2646 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 19:11:17.850744 kubelet[2646]: I0620 19:11:17.850449 2646 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:11:17.850992 kubelet[2646]: I0620 19:11:17.850775 2646 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 19:11:17.853302 kubelet[2646]: I0620 19:11:17.852942 2646 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 19:11:17.856968 kubelet[2646]: I0620 19:11:17.856889 2646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:11:17.862389 kubelet[2646]: E0620 19:11:17.862336 2646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:11:17.862702 kubelet[2646]: I0620 19:11:17.862393 2646 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jun 20 19:11:17.867471 kubelet[2646]: I0620 19:11:17.867437 2646 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:11:17.867954 kubelet[2646]: I0620 19:11:17.867903 2646 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:11:17.868215 kubelet[2646]: I0620 19:11:17.867952 2646 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Jun 20 19:11:17.868542 kubelet[2646]: I0620 19:11:17.868223 2646 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:11:17.868542 kubelet[2646]: I0620 19:11:17.868243 2646 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 19:11:17.868542 kubelet[2646]: I0620 19:11:17.868341 2646 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:11:17.868691 kubelet[2646]: I0620 19:11:17.868593 2646 kubelet.go:480] "Attempting to sync node with API server" Jun 20 19:11:17.870385 kubelet[2646]: I0620 19:11:17.870171 2646 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:11:17.870385 kubelet[2646]: I0620 19:11:17.870248 2646 kubelet.go:386] "Adding apiserver pod source" Jun 20 19:11:17.872255 kubelet[2646]: I0620 19:11:17.871671 2646 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:11:17.882624 kubelet[2646]: I0620 19:11:17.881759 2646 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:11:17.882741 kubelet[2646]: I0620 19:11:17.882713 2646 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 19:11:17.916254 sudo[2661]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:11:17.917321 sudo[2661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:11:17.938761 kubelet[2646]: I0620 19:11:17.938634 2646 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:11:17.938761 kubelet[2646]: I0620 19:11:17.938703 2646 server.go:1289] "Started kubelet" Jun 20 19:11:17.946462 kubelet[2646]: I0620 19:11:17.946392 2646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:11:17.948289 
kubelet[2646]: I0620 19:11:17.946820 2646 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:11:17.948289 kubelet[2646]: I0620 19:11:17.946899 2646 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:11:17.948289 kubelet[2646]: I0620 19:11:17.948150 2646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:11:17.949394 kubelet[2646]: I0620 19:11:17.949348 2646 server.go:317] "Adding debug handlers to kubelet server" Jun 20 19:11:17.955300 kubelet[2646]: I0620 19:11:17.955147 2646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:11:17.956954 kubelet[2646]: I0620 19:11:17.956928 2646 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:11:17.960315 kubelet[2646]: I0620 19:11:17.959247 2646 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:11:17.960516 kubelet[2646]: I0620 19:11:17.960467 2646 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:11:17.966786 kubelet[2646]: I0620 19:11:17.963990 2646 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:11:17.969359 kubelet[2646]: I0620 19:11:17.967849 2646 factory.go:223] Registration of the containerd container factory successfully Jun 20 19:11:17.969359 kubelet[2646]: I0620 19:11:17.967873 2646 factory.go:223] Registration of the systemd container factory successfully Jun 20 19:11:17.977159 kubelet[2646]: E0620 19:11:17.977134 2646 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:11:18.029324 kubelet[2646]: I0620 19:11:18.028433    2646 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:11:18.037984 kubelet[2646]: I0620 19:11:18.037755    2646 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:11:18.037984 kubelet[2646]: I0620 19:11:18.037786    2646 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 19:11:18.038676 kubelet[2646]: I0620 19:11:18.037816    2646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:11:18.038676 kubelet[2646]: I0620 19:11:18.038610    2646 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 20 19:11:18.039615 kubelet[2646]: E0620 19:11:18.039387    2646 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.131790    2646 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.131815    2646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.131843    2646 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.132029    2646 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.132046    2646 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.132074    2646 policy_none.go:49] "None policy: Start"
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.132092    2646 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:11:18.132359 kubelet[2646]: I0620 19:11:18.132108    2646 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:11:18.133079 kubelet[2646]: I0620 19:11:18.132961    2646 state_mem.go:75] "Updated machine memory state"
Jun 20 19:11:18.140168 kubelet[2646]: E0620 19:11:18.140045    2646 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 20 19:11:18.140861 kubelet[2646]: E0620 19:11:18.140576    2646 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jun 20 19:11:18.140861 kubelet[2646]: I0620 19:11:18.140790    2646 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:11:18.141828 kubelet[2646]: I0620 19:11:18.140831    2646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:11:18.144526 kubelet[2646]: I0620 19:11:18.143789    2646 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:11:18.158673 kubelet[2646]: E0620 19:11:18.158642    2646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:11:18.268923 kubelet[2646]: I0620 19:11:18.268884    2646 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.280763 kubelet[2646]: I0620 19:11:18.280645    2646 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.280763 kubelet[2646]: I0620 19:11:18.280742    2646 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.341813 kubelet[2646]: I0620 19:11:18.341575    2646 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.343036 kubelet[2646]: I0620 19:11:18.342350    2646 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.343360 kubelet[2646]: I0620 19:11:18.342545    2646 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.355537 kubelet[2646]: I0620 19:11:18.355252    2646 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Jun 20 19:11:18.358918 kubelet[2646]: I0620 19:11:18.358892    2646 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Jun 20 19:11:18.362215 kubelet[2646]: I0620 19:11:18.361842    2646 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Jun 20 19:11:18.363846 kubelet[2646]: I0620 19:11:18.363672    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7ebf05b049163054f8514d5655bc8d6-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"e7ebf05b049163054f8514d5655bc8d6\") " pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365107 kubelet[2646]: I0620 19:11:18.364212    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82a72703a4fe51b383501f9398ebf963-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"82a72703a4fe51b383501f9398ebf963\") " pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365107 kubelet[2646]: I0620 19:11:18.364278    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82a72703a4fe51b383501f9398ebf963-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"82a72703a4fe51b383501f9398ebf963\") " pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365107 kubelet[2646]: I0620 19:11:18.364313    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365107 kubelet[2646]: I0620 19:11:18.364343    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365390 kubelet[2646]: I0620 19:11:18.364421    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365390 kubelet[2646]: I0620 19:11:18.364451    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365390 kubelet[2646]: I0620 19:11:18.364480    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a99fb32fb9fe7012b5ca3865c463c13-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"9a99fb32fb9fe7012b5ca3865c463c13\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.365390 kubelet[2646]: I0620 19:11:18.364509    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82a72703a4fe51b383501f9398ebf963-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" (UID: \"82a72703a4fe51b383501f9398ebf963\") " pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:18.749152 sudo[2661]: pam_unix(sudo:session): session closed for user root
Jun 20 19:11:18.873979 kubelet[2646]: I0620 19:11:18.873903    2646 apiserver.go:52] "Watching apiserver"
Jun 20 19:11:18.960744 kubelet[2646]: I0620 19:11:18.960664    2646 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:11:19.078833 kubelet[2646]: I0620 19:11:19.076725    2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" podStartSLOduration=1.076681422 podStartE2EDuration="1.076681422s" podCreationTimestamp="2025-06-20 19:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:19.052713482 +0000 UTC m=+1.281324018" watchObservedRunningTime="2025-06-20 19:11:19.076681422 +0000 UTC m=+1.305291959"
Jun 20 19:11:19.092449 kubelet[2646]: I0620 19:11:19.092222    2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" podStartSLOduration=1.092201168 podStartE2EDuration="1.092201168s" podCreationTimestamp="2025-06-20 19:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:19.079018607 +0000 UTC m=+1.307629145" watchObservedRunningTime="2025-06-20 19:11:19.092201168 +0000 UTC m=+1.320811694"
Jun 20 19:11:19.110290 kubelet[2646]: I0620 19:11:19.108522    2646 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:19.115565 kubelet[2646]: I0620 19:11:19.115215    2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" podStartSLOduration=1.115198937 podStartE2EDuration="1.115198937s" podCreationTimestamp="2025-06-20 19:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:19.093819374 +0000 UTC m=+1.322429900" watchObservedRunningTime="2025-06-20 19:11:19.115198937 +0000 UTC m=+1.343809473"
Jun 20 19:11:19.126029 kubelet[2646]: I0620 19:11:19.125841    2646 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]"
Jun 20 19:11:19.126029 kubelet[2646]: E0620 19:11:19.125955    2646 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal"
Jun 20 19:11:21.072870 sudo[1727]: pam_unix(sudo:session): session closed for user root
Jun 20 19:11:21.116620 sshd[1726]: Connection closed by 147.75.109.163 port 34710
Jun 20 19:11:21.117532 sshd-session[1724]: pam_unix(sshd:session): session closed for user core
Jun 20 19:11:21.122861 systemd[1]: sshd@6-10.128.0.9:22-147.75.109.163:34710.service: Deactivated successfully.
Jun 20 19:11:21.126614 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:11:21.126907 systemd[1]: session-7.scope: Consumed 8.804s CPU time, 267.5M memory peak.
Jun 20 19:11:21.129767 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:11:21.131313 systemd-logind[1458]: Removed session 7.
Jun 20 19:11:22.136530 kubelet[2646]: I0620 19:11:22.136459    2646 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 19:11:22.137154 kubelet[2646]: I0620 19:11:22.137066    2646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 19:11:22.137226 containerd[1472]: time="2025-06-20T19:11:22.136841086Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 20 19:11:23.236129 systemd[1]: Created slice kubepods-besteffort-pod82d34adf_e347_4be0_9b6a_ad545fd0308d.slice - libcontainer container kubepods-besteffort-pod82d34adf_e347_4be0_9b6a_ad545fd0308d.slice.
Jun 20 19:11:23.257602 systemd[1]: Created slice kubepods-burstable-podd81aeab4_73ff_406b_b857_719d4be7d85e.slice - libcontainer container kubepods-burstable-podd81aeab4_73ff_406b_b857_719d4be7d85e.slice.
Jun 20 19:11:23.300546 kubelet[2646]: I0620 19:11:23.300504    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-hostproc\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.302449 kubelet[2646]: I0620 19:11:23.301420    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-net\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.302449 kubelet[2646]: I0620 19:11:23.302198    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82d34adf-e347-4be0-9b6a-ad545fd0308d-kube-proxy\") pod \"kube-proxy-68l2t\" (UID: \"82d34adf-e347-4be0-9b6a-ad545fd0308d\") " pod="kube-system/kube-proxy-68l2t"
Jun 20 19:11:23.302449 kubelet[2646]: I0620 19:11:23.302341    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwplh\" (UniqueName: \"kubernetes.io/projected/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-kube-api-access-jwplh\") pod \"cilium-operator-6c4d7847fc-h4zcg\" (UID: \"8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef\") " pod="kube-system/cilium-operator-6c4d7847fc-h4zcg"
Jun 20 19:11:23.305024 kubelet[2646]: I0620 19:11:23.303344    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-etc-cni-netd\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305024 kubelet[2646]: I0620 19:11:23.303390    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h4zcg\" (UID: \"8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef\") " pod="kube-system/cilium-operator-6c4d7847fc-h4zcg"
Jun 20 19:11:23.305024 kubelet[2646]: I0620 19:11:23.303419    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cni-path\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305024 kubelet[2646]: I0620 19:11:23.303447    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-lib-modules\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305024 kubelet[2646]: I0620 19:11:23.303474    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-config-path\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305410 kubelet[2646]: I0620 19:11:23.303499    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-run\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305410 kubelet[2646]: I0620 19:11:23.303549    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-cgroup\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305410 kubelet[2646]: I0620 19:11:23.303577    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-bpf-maps\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305410 kubelet[2646]: I0620 19:11:23.303602    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-xtables-lock\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305410 kubelet[2646]: I0620 19:11:23.303629    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d81aeab4-73ff-406b-b857-719d4be7d85e-clustermesh-secrets\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305410 kubelet[2646]: I0620 19:11:23.303657    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-kernel\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.305922 kubelet[2646]: I0620 19:11:23.303683    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82d34adf-e347-4be0-9b6a-ad545fd0308d-lib-modules\") pod \"kube-proxy-68l2t\" (UID: \"82d34adf-e347-4be0-9b6a-ad545fd0308d\") " pod="kube-system/kube-proxy-68l2t"
Jun 20 19:11:23.305922 kubelet[2646]: I0620 19:11:23.303708    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-hubble-tls\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.306837 kubelet[2646]: I0620 19:11:23.304253    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nst2f\" (UniqueName: \"kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-kube-api-access-nst2f\") pod \"cilium-cj2x9\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") " pod="kube-system/cilium-cj2x9"
Jun 20 19:11:23.307005 kubelet[2646]: I0620 19:11:23.306985    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82d34adf-e347-4be0-9b6a-ad545fd0308d-xtables-lock\") pod \"kube-proxy-68l2t\" (UID: \"82d34adf-e347-4be0-9b6a-ad545fd0308d\") " pod="kube-system/kube-proxy-68l2t"
Jun 20 19:11:23.307634 kubelet[2646]: I0620 19:11:23.307105    2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qtq\" (UniqueName: \"kubernetes.io/projected/82d34adf-e347-4be0-9b6a-ad545fd0308d-kube-api-access-56qtq\") pod \"kube-proxy-68l2t\" (UID: \"82d34adf-e347-4be0-9b6a-ad545fd0308d\") " pod="kube-system/kube-proxy-68l2t"
Jun 20 19:11:23.307618 systemd[1]: Created slice kubepods-besteffort-pod8e56f8cd_6ce0_4a0a_8da1_cbcfbdc797ef.slice - libcontainer container kubepods-besteffort-pod8e56f8cd_6ce0_4a0a_8da1_cbcfbdc797ef.slice.
Jun 20 19:11:23.549288 containerd[1472]: time="2025-06-20T19:11:23.549097904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68l2t,Uid:82d34adf-e347-4be0-9b6a-ad545fd0308d,Namespace:kube-system,Attempt:0,}"
Jun 20 19:11:23.568566 containerd[1472]: time="2025-06-20T19:11:23.566355554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cj2x9,Uid:d81aeab4-73ff-406b-b857-719d4be7d85e,Namespace:kube-system,Attempt:0,}"
Jun 20 19:11:23.593680 containerd[1472]: time="2025-06-20T19:11:23.593505077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:23.593997 containerd[1472]: time="2025-06-20T19:11:23.593713526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:23.593997 containerd[1472]: time="2025-06-20T19:11:23.593744193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:23.595970 containerd[1472]: time="2025-06-20T19:11:23.595828872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:23.615443 containerd[1472]: time="2025-06-20T19:11:23.613139989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:23.615443 containerd[1472]: time="2025-06-20T19:11:23.614597374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:23.615443 containerd[1472]: time="2025-06-20T19:11:23.614620429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:23.615443 containerd[1472]: time="2025-06-20T19:11:23.614738652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:23.616560 containerd[1472]: time="2025-06-20T19:11:23.616244573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h4zcg,Uid:8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef,Namespace:kube-system,Attempt:0,}"
Jun 20 19:11:23.642519 systemd[1]: Started cri-containerd-a3a76c15d85e568a7cf83ae81899ac48d477e99433979bc53b666cb0c6350f68.scope - libcontainer container a3a76c15d85e568a7cf83ae81899ac48d477e99433979bc53b666cb0c6350f68.
Jun 20 19:11:23.665885 systemd[1]: Started cri-containerd-e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4.scope - libcontainer container e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4.
Jun 20 19:11:23.686788 containerd[1472]: time="2025-06-20T19:11:23.684112407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:23.686788 containerd[1472]: time="2025-06-20T19:11:23.686514381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:23.686788 containerd[1472]: time="2025-06-20T19:11:23.686558995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:23.686788 containerd[1472]: time="2025-06-20T19:11:23.686690204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:23.732567 systemd[1]: Started cri-containerd-93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4.scope - libcontainer container 93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4.
Jun 20 19:11:23.740639 containerd[1472]: time="2025-06-20T19:11:23.739858256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cj2x9,Uid:d81aeab4-73ff-406b-b857-719d4be7d85e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\""
Jun 20 19:11:23.741709 containerd[1472]: time="2025-06-20T19:11:23.741644671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68l2t,Uid:82d34adf-e347-4be0-9b6a-ad545fd0308d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3a76c15d85e568a7cf83ae81899ac48d477e99433979bc53b666cb0c6350f68\""
Jun 20 19:11:23.748236 containerd[1472]: time="2025-06-20T19:11:23.748192331Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jun 20 19:11:23.753504 containerd[1472]: time="2025-06-20T19:11:23.753466146Z" level=info msg="CreateContainer within sandbox \"a3a76c15d85e568a7cf83ae81899ac48d477e99433979bc53b666cb0c6350f68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 19:11:23.777062 containerd[1472]: time="2025-06-20T19:11:23.776916910Z" level=info msg="CreateContainer within sandbox \"a3a76c15d85e568a7cf83ae81899ac48d477e99433979bc53b666cb0c6350f68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f6a4b02cfffd439392dc5c80beb7542bbe0a2558a23c9e2748c86bcff16941e\""
Jun 20 19:11:23.783252 containerd[1472]: time="2025-06-20T19:11:23.779205554Z" level=info msg="StartContainer for \"2f6a4b02cfffd439392dc5c80beb7542bbe0a2558a23c9e2748c86bcff16941e\""
Jun 20 19:11:23.824667 containerd[1472]: time="2025-06-20T19:11:23.824442717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h4zcg,Uid:8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\""
Jun 20 19:11:23.844524 systemd[1]: Started cri-containerd-2f6a4b02cfffd439392dc5c80beb7542bbe0a2558a23c9e2748c86bcff16941e.scope - libcontainer container 2f6a4b02cfffd439392dc5c80beb7542bbe0a2558a23c9e2748c86bcff16941e.
Jun 20 19:11:23.886639 containerd[1472]: time="2025-06-20T19:11:23.886498436Z" level=info msg="StartContainer for \"2f6a4b02cfffd439392dc5c80beb7542bbe0a2558a23c9e2748c86bcff16941e\" returns successfully"
Jun 20 19:11:24.136953 kubelet[2646]: I0620 19:11:24.136225    2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-68l2t" podStartSLOduration=1.136191881 podStartE2EDuration="1.136191881s" podCreationTimestamp="2025-06-20 19:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:24.135674258 +0000 UTC m=+6.364284793" watchObservedRunningTime="2025-06-20 19:11:24.136191881 +0000 UTC m=+6.364802416"
Jun 20 19:11:26.437420 update_engine[1463]: I20250620 19:11:26.437343  1463 update_attempter.cc:509] Updating boot flags...
Jun 20 19:11:26.541338 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3031)
Jun 20 19:11:26.741473 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3035)
Jun 20 19:11:29.373818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448310796.mount: Deactivated successfully.
Jun 20 19:11:32.137565 containerd[1472]: time="2025-06-20T19:11:32.137492236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:32.139036 containerd[1472]: time="2025-06-20T19:11:32.138931259Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jun 20 19:11:32.140212 containerd[1472]: time="2025-06-20T19:11:32.140079026Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:32.142306 containerd[1472]: time="2025-06-20T19:11:32.142232282Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.393676728s"
Jun 20 19:11:32.143277 containerd[1472]: time="2025-06-20T19:11:32.142307789Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 20 19:11:32.144600 containerd[1472]: time="2025-06-20T19:11:32.144015239Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 20 19:11:32.147903 containerd[1472]: time="2025-06-20T19:11:32.147855647Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:11:32.170055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949045610.mount: Deactivated successfully.
Jun 20 19:11:32.176100 containerd[1472]: time="2025-06-20T19:11:32.175825163Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\""
Jun 20 19:11:32.178638 containerd[1472]: time="2025-06-20T19:11:32.178309725Z" level=info msg="StartContainer for \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\""
Jun 20 19:11:32.228473 systemd[1]: Started cri-containerd-fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165.scope - libcontainer container fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165.
Jun 20 19:11:32.267038 containerd[1472]: time="2025-06-20T19:11:32.266987469Z" level=info msg="StartContainer for \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\" returns successfully"
Jun 20 19:11:32.282244 systemd[1]: cri-containerd-fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165.scope: Deactivated successfully.
Jun 20 19:11:33.159922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165-rootfs.mount: Deactivated successfully.
Jun 20 19:11:34.123243 containerd[1472]: time="2025-06-20T19:11:34.123170129Z" level=info msg="shim disconnected" id=fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165 namespace=k8s.io
Jun 20 19:11:34.123243 containerd[1472]: time="2025-06-20T19:11:34.123236084Z" level=warning msg="cleaning up after shim disconnected" id=fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165 namespace=k8s.io
Jun 20 19:11:34.123243 containerd[1472]: time="2025-06-20T19:11:34.123248981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:11:34.219668 containerd[1472]: time="2025-06-20T19:11:34.219458085Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:11:34.240446 containerd[1472]: time="2025-06-20T19:11:34.240361473Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\""
Jun 20 19:11:34.242586 containerd[1472]: time="2025-06-20T19:11:34.242502990Z" level=info msg="StartContainer for \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\""
Jun 20 19:11:34.300473 systemd[1]: Started cri-containerd-efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b.scope - libcontainer container efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b.
Jun 20 19:11:34.340191 containerd[1472]: time="2025-06-20T19:11:34.340109253Z" level=info msg="StartContainer for \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\" returns successfully"
Jun 20 19:11:34.366136 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:11:34.366627 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:11:34.367770 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:11:34.380579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:11:34.386216 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:11:34.387031 systemd[1]: cri-containerd-efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b.scope: Deactivated successfully.
Jun 20 19:11:34.420035 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:11:34.427952 containerd[1472]: time="2025-06-20T19:11:34.427829086Z" level=info msg="shim disconnected" id=efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b namespace=k8s.io
Jun 20 19:11:34.427952 containerd[1472]: time="2025-06-20T19:11:34.427906047Z" level=warning msg="cleaning up after shim disconnected" id=efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b namespace=k8s.io
Jun 20 19:11:34.427952 containerd[1472]: time="2025-06-20T19:11:34.427924474Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:11:35.229946 containerd[1472]: time="2025-06-20T19:11:35.229552169Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:11:35.239689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b-rootfs.mount: Deactivated successfully.
Jun 20 19:11:35.267281 containerd[1472]: time="2025-06-20T19:11:35.267073383Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\""
Jun 20 19:11:35.268660 containerd[1472]: time="2025-06-20T19:11:35.267880529Z" level=info msg="StartContainer for \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\""
Jun 20 19:11:35.343544 systemd[1]: Started cri-containerd-57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146.scope - libcontainer container 57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146.
Jun 20 19:11:35.416589 containerd[1472]: time="2025-06-20T19:11:35.416522683Z" level=info msg="StartContainer for \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\" returns successfully"
Jun 20 19:11:35.423433 systemd[1]: cri-containerd-57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146.scope: Deactivated successfully.
Jun 20 19:11:35.647295 containerd[1472]: time="2025-06-20T19:11:35.647117369Z" level=info msg="shim disconnected" id=57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146 namespace=k8s.io
Jun 20 19:11:35.647295 containerd[1472]: time="2025-06-20T19:11:35.647188077Z" level=warning msg="cleaning up after shim disconnected" id=57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146 namespace=k8s.io
Jun 20 19:11:35.647295 containerd[1472]: time="2025-06-20T19:11:35.647202044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:11:35.691113 containerd[1472]: time="2025-06-20T19:11:35.691050206Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:35.692282 containerd[1472]: time="2025-06-20T19:11:35.692194013Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jun 20 19:11:35.693850 containerd[1472]: time="2025-06-20T19:11:35.693792771Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:35.695697 containerd[1472]: time="2025-06-20T19:11:35.695305854Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.551249896s"
Jun 20 19:11:35.695697 containerd[1472]: time="2025-06-20T19:11:35.695353291Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 20 19:11:35.700747 containerd[1472]: time="2025-06-20T19:11:35.700711878Z" level=info msg="CreateContainer within sandbox \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 20 19:11:35.718069 containerd[1472]: time="2025-06-20T19:11:35.718030114Z" level=info msg="CreateContainer within sandbox \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\""
Jun 20 19:11:35.718724 containerd[1472]: time="2025-06-20T19:11:35.718696318Z" level=info msg="StartContainer for \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\""
Jun 20 19:11:35.756489 systemd[1]: Started cri-containerd-8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a.scope - libcontainer container 8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a.
Jun 20 19:11:35.795547 containerd[1472]: time="2025-06-20T19:11:35.795375549Z" level=info msg="StartContainer for \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\" returns successfully"
Jun 20 19:11:36.240380 containerd[1472]: time="2025-06-20T19:11:36.240277093Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:11:36.250117 systemd[1]: run-containerd-runc-k8s.io-57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146-runc.5cZc8b.mount: Deactivated successfully.
Jun 20 19:11:36.250306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146-rootfs.mount: Deactivated successfully.
Jun 20 19:11:36.276777 containerd[1472]: time="2025-06-20T19:11:36.276622494Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\""
Jun 20 19:11:36.278290 containerd[1472]: time="2025-06-20T19:11:36.277626159Z" level=info msg="StartContainer for \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\""
Jun 20 19:11:36.374685 systemd[1]: Started cri-containerd-d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f.scope - libcontainer container d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f.
Jun 20 19:11:36.497031 containerd[1472]: time="2025-06-20T19:11:36.495738297Z" level=info msg="StartContainer for \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\" returns successfully"
Jun 20 19:11:36.500053 systemd[1]: cri-containerd-d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f.scope: Deactivated successfully.
Jun 20 19:11:36.545949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f-rootfs.mount: Deactivated successfully.
Jun 20 19:11:36.552559 containerd[1472]: time="2025-06-20T19:11:36.552476167Z" level=info msg="shim disconnected" id=d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f namespace=k8s.io
Jun 20 19:11:36.552559 containerd[1472]: time="2025-06-20T19:11:36.552554868Z" level=warning msg="cleaning up after shim disconnected" id=d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f namespace=k8s.io
Jun 20 19:11:36.552828 containerd[1472]: time="2025-06-20T19:11:36.552571149Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:11:36.607290 kubelet[2646]: I0620 19:11:36.605715 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h4zcg" podStartSLOduration=1.736573846 podStartE2EDuration="13.605687613s" podCreationTimestamp="2025-06-20 19:11:23 +0000 UTC" firstStartedPulling="2025-06-20 19:11:23.827192153 +0000 UTC m=+6.055802682" lastFinishedPulling="2025-06-20 19:11:35.696305924 +0000 UTC m=+17.924916449" observedRunningTime="2025-06-20 19:11:36.362215326 +0000 UTC m=+18.590825865" watchObservedRunningTime="2025-06-20 19:11:36.605687613 +0000 UTC m=+18.834298150"
Jun 20 19:11:37.248301 containerd[1472]: time="2025-06-20T19:11:37.247481006Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:11:37.293468 containerd[1472]: time="2025-06-20T19:11:37.292558755Z" level=info msg="CreateContainer within sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\""
Jun 20 19:11:37.294489 containerd[1472]: time="2025-06-20T19:11:37.294433839Z" level=info msg="StartContainer for \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\""
Jun 20 19:11:37.364522 systemd[1]: Started cri-containerd-bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2.scope - libcontainer container bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2.
Jun 20 19:11:37.411598 containerd[1472]: time="2025-06-20T19:11:37.411532377Z" level=info msg="StartContainer for \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\" returns successfully"
Jun 20 19:11:37.565236 kubelet[2646]: I0620 19:11:37.565115 2646 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jun 20 19:11:37.632318 systemd[1]: Created slice kubepods-burstable-pod02e03cf8_b902_4a0a_b0a8_4213be477f39.slice - libcontainer container kubepods-burstable-pod02e03cf8_b902_4a0a_b0a8_4213be477f39.slice.
Jun 20 19:11:37.643567 systemd[1]: Created slice kubepods-burstable-pod65272fa2_ef58_47d0_b32d_c81fd64b1ada.slice - libcontainer container kubepods-burstable-pod65272fa2_ef58_47d0_b32d_c81fd64b1ada.slice.
Jun 20 19:11:37.722410 kubelet[2646]: I0620 19:11:37.722138 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65272fa2-ef58-47d0-b32d-c81fd64b1ada-config-volume\") pod \"coredns-674b8bbfcf-86hxc\" (UID: \"65272fa2-ef58-47d0-b32d-c81fd64b1ada\") " pod="kube-system/coredns-674b8bbfcf-86hxc"
Jun 20 19:11:37.722410 kubelet[2646]: I0620 19:11:37.722207 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02e03cf8-b902-4a0a-b0a8-4213be477f39-config-volume\") pod \"coredns-674b8bbfcf-fbq6j\" (UID: \"02e03cf8-b902-4a0a-b0a8-4213be477f39\") " pod="kube-system/coredns-674b8bbfcf-fbq6j"
Jun 20 19:11:37.722410 kubelet[2646]: I0620 19:11:37.722245 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5fq\" (UniqueName: \"kubernetes.io/projected/65272fa2-ef58-47d0-b32d-c81fd64b1ada-kube-api-access-zv5fq\") pod \"coredns-674b8bbfcf-86hxc\" (UID: \"65272fa2-ef58-47d0-b32d-c81fd64b1ada\") " pod="kube-system/coredns-674b8bbfcf-86hxc"
Jun 20 19:11:37.722410 kubelet[2646]: I0620 19:11:37.722294 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmwfv\" (UniqueName: \"kubernetes.io/projected/02e03cf8-b902-4a0a-b0a8-4213be477f39-kube-api-access-wmwfv\") pod \"coredns-674b8bbfcf-fbq6j\" (UID: \"02e03cf8-b902-4a0a-b0a8-4213be477f39\") " pod="kube-system/coredns-674b8bbfcf-fbq6j"
Jun 20 19:11:37.939972 containerd[1472]: time="2025-06-20T19:11:37.939828620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fbq6j,Uid:02e03cf8-b902-4a0a-b0a8-4213be477f39,Namespace:kube-system,Attempt:0,}"
Jun 20 19:11:37.950053 containerd[1472]: time="2025-06-20T19:11:37.949992414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86hxc,Uid:65272fa2-ef58-47d0-b32d-c81fd64b1ada,Namespace:kube-system,Attempt:0,}"
Jun 20 19:11:39.989191 systemd-networkd[1381]: cilium_host: Link UP
Jun 20 19:11:39.989586 systemd-networkd[1381]: cilium_net: Link UP
Jun 20 19:11:39.991694 systemd-networkd[1381]: cilium_net: Gained carrier
Jun 20 19:11:39.992056 systemd-networkd[1381]: cilium_host: Gained carrier
Jun 20 19:11:40.143609 systemd-networkd[1381]: cilium_vxlan: Link UP
Jun 20 19:11:40.143627 systemd-networkd[1381]: cilium_vxlan: Gained carrier
Jun 20 19:11:40.224559 systemd-networkd[1381]: cilium_net: Gained IPv6LL
Jun 20 19:11:40.224980 systemd-networkd[1381]: cilium_host: Gained IPv6LL
Jun 20 19:11:40.425421 kernel: NET: Registered PF_ALG protocol family
Jun 20 19:11:41.285195 systemd-networkd[1381]: lxc_health: Link UP
Jun 20 19:11:41.291674 systemd-networkd[1381]: lxc_health: Gained carrier
Jun 20 19:11:41.529855 systemd-networkd[1381]: cilium_vxlan: Gained IPv6LL
Jun 20 19:11:41.539630 systemd-networkd[1381]: lxce3061aef307c: Link UP
Jun 20 19:11:41.555331 kernel: eth0: renamed from tmp656a0
Jun 20 19:11:41.571852 systemd-networkd[1381]: lxc76f4405abc4f: Link UP
Jun 20 19:11:41.579566 systemd-networkd[1381]: lxce3061aef307c: Gained carrier
Jun 20 19:11:41.598786 kernel: eth0: renamed from tmpa6999
Jun 20 19:11:41.605652 systemd-networkd[1381]: lxc76f4405abc4f: Gained carrier
Jun 20 19:11:41.638105 kubelet[2646]: I0620 19:11:41.637743 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cj2x9" podStartSLOduration=10.23864959 podStartE2EDuration="18.637719361s" podCreationTimestamp="2025-06-20 19:11:23 +0000 UTC" firstStartedPulling="2025-06-20 19:11:23.744676327 +0000 UTC m=+5.973286867" lastFinishedPulling="2025-06-20 19:11:32.143746111 +0000 UTC m=+14.372356638" observedRunningTime="2025-06-20 19:11:38.293735916 +0000 UTC m=+20.522346464" watchObservedRunningTime="2025-06-20 19:11:41.637719361 +0000 UTC m=+23.866329896"
Jun 20 19:11:42.617541 systemd-networkd[1381]: lxc_health: Gained IPv6LL
Jun 20 19:11:42.680916 systemd-networkd[1381]: lxc76f4405abc4f: Gained IPv6LL
Jun 20 19:11:42.872441 systemd-networkd[1381]: lxce3061aef307c: Gained IPv6LL
Jun 20 19:11:45.629564 ntpd[1440]: Listen normally on 7 cilium_host 192.168.0.179:123
Jun 20 19:11:45.630767 ntpd[1440]: 20 Jun 19:11:45 ntpd[1440]: Listen normally on 7 cilium_host 192.168.0.179:123
Jun 20 19:11:45.630767 ntpd[1440]: 20 Jun 19:11:45 ntpd[1440]: Listen normally on 8 cilium_net [fe80::185a:3fff:fee6:9b04%4]:123
Jun 20 19:11:45.630767 ntpd[1440]: 20 Jun 19:11:45 ntpd[1440]: Listen normally on 9 cilium_host [fe80::7035:b5ff:fec2:e354%5]:123
Jun 20 19:11:45.630767 ntpd[1440]: 20 Jun 19:11:45 ntpd[1440]: Listen normally on 10 cilium_vxlan [fe80::4ca3:e0ff:fe7a:7519%6]:123
Jun 20 19:11:45.630767 ntpd[1440]: 20 Jun 19:11:45 ntpd[1440]: Listen normally on 11 lxc_health [fe80::e805:53ff:fe28:1922%8]:123
Jun 20 19:11:45.630767 ntpd[1440]: 20 Jun 19:11:45 ntpd[1440]: Listen normally on 12 lxce3061aef307c [fe80::bcb5:6ff:fe56:48eb%10]:123
Jun 20 19:11:45.630767 ntpd[1440]: 20 Jun 19:11:45 ntpd[1440]: Listen normally on 13 lxc76f4405abc4f [fe80::ac07:4cff:fe77:b616%12]:123
Jun 20 19:11:45.629695 ntpd[1440]: Listen normally on 8 cilium_net [fe80::185a:3fff:fee6:9b04%4]:123
Jun 20 19:11:45.629777 ntpd[1440]: Listen normally on 9 cilium_host [fe80::7035:b5ff:fec2:e354%5]:123
Jun 20 19:11:45.629840 ntpd[1440]: Listen normally on 10 cilium_vxlan [fe80::4ca3:e0ff:fe7a:7519%6]:123
Jun 20 19:11:45.629912 ntpd[1440]: Listen normally on 11 lxc_health [fe80::e805:53ff:fe28:1922%8]:123
Jun 20 19:11:45.629971 ntpd[1440]: Listen normally on 12 lxce3061aef307c [fe80::bcb5:6ff:fe56:48eb%10]:123
Jun 20 19:11:45.630030 ntpd[1440]: Listen normally on 13 lxc76f4405abc4f [fe80::ac07:4cff:fe77:b616%12]:123
Jun 20 19:11:46.707361 containerd[1472]: time="2025-06-20T19:11:46.706678421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:46.707361 containerd[1472]: time="2025-06-20T19:11:46.706764752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:46.707361 containerd[1472]: time="2025-06-20T19:11:46.706794293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:46.707361 containerd[1472]: time="2025-06-20T19:11:46.706912801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:46.740342 containerd[1472]: time="2025-06-20T19:11:46.735980406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:46.740342 containerd[1472]: time="2025-06-20T19:11:46.736076652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:46.740342 containerd[1472]: time="2025-06-20T19:11:46.736100464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:46.740342 containerd[1472]: time="2025-06-20T19:11:46.736246754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:46.778513 systemd[1]: Started cri-containerd-a69990ae194a14c7974ab3126491f02a55db93abc7fda512625c4953426dff96.scope - libcontainer container a69990ae194a14c7974ab3126491f02a55db93abc7fda512625c4953426dff96.
Jun 20 19:11:46.797528 systemd[1]: Started cri-containerd-656a0aca49d2b42cd6b9ac4b1fe709845b3395cb936ab272f8f8e77661b3ce89.scope - libcontainer container 656a0aca49d2b42cd6b9ac4b1fe709845b3395cb936ab272f8f8e77661b3ce89.
Jun 20 19:11:46.910544 containerd[1472]: time="2025-06-20T19:11:46.910477647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-86hxc,Uid:65272fa2-ef58-47d0-b32d-c81fd64b1ada,Namespace:kube-system,Attempt:0,} returns sandbox id \"a69990ae194a14c7974ab3126491f02a55db93abc7fda512625c4953426dff96\""
Jun 20 19:11:46.920596 containerd[1472]: time="2025-06-20T19:11:46.920540166Z" level=info msg="CreateContainer within sandbox \"a69990ae194a14c7974ab3126491f02a55db93abc7fda512625c4953426dff96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:11:46.933835 containerd[1472]: time="2025-06-20T19:11:46.933749487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fbq6j,Uid:02e03cf8-b902-4a0a-b0a8-4213be477f39,Namespace:kube-system,Attempt:0,} returns sandbox id \"656a0aca49d2b42cd6b9ac4b1fe709845b3395cb936ab272f8f8e77661b3ce89\""
Jun 20 19:11:46.949148 containerd[1472]: time="2025-06-20T19:11:46.948925590Z" level=info msg="CreateContainer within sandbox \"656a0aca49d2b42cd6b9ac4b1fe709845b3395cb936ab272f8f8e77661b3ce89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:11:46.953034 containerd[1472]: time="2025-06-20T19:11:46.952975684Z" level=info msg="CreateContainer within sandbox \"a69990ae194a14c7974ab3126491f02a55db93abc7fda512625c4953426dff96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"931132c3720777da15e0e97946bfb73f1d1d7770361c37cd43ad5fc19a6d1fd5\""
Jun 20 19:11:46.954594 containerd[1472]: time="2025-06-20T19:11:46.954455570Z" level=info msg="StartContainer for \"931132c3720777da15e0e97946bfb73f1d1d7770361c37cd43ad5fc19a6d1fd5\""
Jun 20 19:11:46.975879 containerd[1472]: time="2025-06-20T19:11:46.975788516Z" level=info msg="CreateContainer within sandbox \"656a0aca49d2b42cd6b9ac4b1fe709845b3395cb936ab272f8f8e77661b3ce89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64cd8e11ea8d6d366c3cb8ad0420e339b6b4428dfd011c8f3895aa13d854fade\""
Jun 20 19:11:46.982116 containerd[1472]: time="2025-06-20T19:11:46.979844795Z" level=info msg="StartContainer for \"64cd8e11ea8d6d366c3cb8ad0420e339b6b4428dfd011c8f3895aa13d854fade\""
Jun 20 19:11:47.020672 systemd[1]: Started cri-containerd-931132c3720777da15e0e97946bfb73f1d1d7770361c37cd43ad5fc19a6d1fd5.scope - libcontainer container 931132c3720777da15e0e97946bfb73f1d1d7770361c37cd43ad5fc19a6d1fd5.
Jun 20 19:11:47.082636 systemd[1]: Started cri-containerd-64cd8e11ea8d6d366c3cb8ad0420e339b6b4428dfd011c8f3895aa13d854fade.scope - libcontainer container 64cd8e11ea8d6d366c3cb8ad0420e339b6b4428dfd011c8f3895aa13d854fade.
Jun 20 19:11:47.097514 containerd[1472]: time="2025-06-20T19:11:47.097467445Z" level=info msg="StartContainer for \"931132c3720777da15e0e97946bfb73f1d1d7770361c37cd43ad5fc19a6d1fd5\" returns successfully"
Jun 20 19:11:47.139310 containerd[1472]: time="2025-06-20T19:11:47.138287250Z" level=info msg="StartContainer for \"64cd8e11ea8d6d366c3cb8ad0420e339b6b4428dfd011c8f3895aa13d854fade\" returns successfully"
Jun 20 19:11:47.306564 kubelet[2646]: I0620 19:11:47.306363 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-86hxc" podStartSLOduration=24.30633977 podStartE2EDuration="24.30633977s" podCreationTimestamp="2025-06-20 19:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:47.305570524 +0000 UTC m=+29.534181070" watchObservedRunningTime="2025-06-20 19:11:47.30633977 +0000 UTC m=+29.534950307"
Jun 20 19:11:47.337140 kubelet[2646]: I0620 19:11:47.336845 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fbq6j" podStartSLOduration=24.336812307 podStartE2EDuration="24.336812307s" podCreationTimestamp="2025-06-20 19:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:47.331309413 +0000 UTC m=+29.559919947" watchObservedRunningTime="2025-06-20 19:11:47.336812307 +0000 UTC m=+29.565422849"
Jun 20 19:11:47.717494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21791133.mount: Deactivated successfully.
Jun 20 19:12:19.230696 systemd[1]: Started sshd@7-10.128.0.9:22-147.75.109.163:60562.service - OpenSSH per-connection server daemon (147.75.109.163:60562).
Jun 20 19:12:19.531058 sshd[4042]: Accepted publickey for core from 147.75.109.163 port 60562 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:19.532801 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:19.540358 systemd-logind[1458]: New session 8 of user core.
Jun 20 19:12:19.547508 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:12:19.851478 sshd[4044]: Connection closed by 147.75.109.163 port 60562
Jun 20 19:12:19.852440 sshd-session[4042]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:19.857518 systemd[1]: sshd@7-10.128.0.9:22-147.75.109.163:60562.service: Deactivated successfully.
Jun 20 19:12:19.860852 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:12:19.863066 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:12:19.864729 systemd-logind[1458]: Removed session 8.
Jun 20 19:12:24.914711 systemd[1]: Started sshd@8-10.128.0.9:22-147.75.109.163:60576.service - OpenSSH per-connection server daemon (147.75.109.163:60576).
Jun 20 19:12:25.203785 sshd[4059]: Accepted publickey for core from 147.75.109.163 port 60576 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:25.205609 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:25.212661 systemd-logind[1458]: New session 9 of user core.
Jun 20 19:12:25.220473 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:12:25.501960 sshd[4061]: Connection closed by 147.75.109.163 port 60576
Jun 20 19:12:25.502871 sshd-session[4059]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:25.508509 systemd[1]: sshd@8-10.128.0.9:22-147.75.109.163:60576.service: Deactivated successfully.
Jun 20 19:12:25.512074 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 19:12:25.513398 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Jun 20 19:12:25.514967 systemd-logind[1458]: Removed session 9.
Jun 20 19:12:30.560674 systemd[1]: Started sshd@9-10.128.0.9:22-147.75.109.163:46978.service - OpenSSH per-connection server daemon (147.75.109.163:46978).
Jun 20 19:12:30.852959 sshd[4074]: Accepted publickey for core from 147.75.109.163 port 46978 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:30.855327 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:30.862556 systemd-logind[1458]: New session 10 of user core.
Jun 20 19:12:30.868483 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 19:12:31.147802 sshd[4076]: Connection closed by 147.75.109.163 port 46978
Jun 20 19:12:31.149051 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:31.153634 systemd[1]: sshd@9-10.128.0.9:22-147.75.109.163:46978.service: Deactivated successfully.
Jun 20 19:12:31.156663 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 19:12:31.158752 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
Jun 20 19:12:31.160738 systemd-logind[1458]: Removed session 10.
Jun 20 19:12:36.206081 systemd[1]: Started sshd@10-10.128.0.9:22-147.75.109.163:33878.service - OpenSSH per-connection server daemon (147.75.109.163:33878).
Jun 20 19:12:36.503029 sshd[4089]: Accepted publickey for core from 147.75.109.163 port 33878 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:36.504998 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:36.512011 systemd-logind[1458]: New session 11 of user core.
Jun 20 19:12:36.515546 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:12:36.791790 sshd[4091]: Connection closed by 147.75.109.163 port 33878
Jun 20 19:12:36.793035 sshd-session[4089]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:36.798194 systemd[1]: sshd@10-10.128.0.9:22-147.75.109.163:33878.service: Deactivated successfully.
Jun 20 19:12:36.801101 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:12:36.802303 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:12:36.803952 systemd-logind[1458]: Removed session 11.
Jun 20 19:12:36.848675 systemd[1]: Started sshd@11-10.128.0.9:22-147.75.109.163:33880.service - OpenSSH per-connection server daemon (147.75.109.163:33880).
Jun 20 19:12:37.136814 sshd[4103]: Accepted publickey for core from 147.75.109.163 port 33880 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:37.138882 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:37.146318 systemd-logind[1458]: New session 12 of user core.
Jun 20 19:12:37.157527 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:12:37.475695 sshd[4105]: Connection closed by 147.75.109.163 port 33880
Jun 20 19:12:37.476671 sshd-session[4103]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:37.482048 systemd[1]: sshd@11-10.128.0.9:22-147.75.109.163:33880.service: Deactivated successfully.
Jun 20 19:12:37.484762 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:12:37.485935 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:12:37.487468 systemd-logind[1458]: Removed session 12.
Jun 20 19:12:37.541054 systemd[1]: Started sshd@12-10.128.0.9:22-147.75.109.163:33882.service - OpenSSH per-connection server daemon (147.75.109.163:33882).
Jun 20 19:12:37.841149 sshd[4115]: Accepted publickey for core from 147.75.109.163 port 33882 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:37.843177 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:37.849149 systemd-logind[1458]: New session 13 of user core.
Jun 20 19:12:37.857466 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:12:38.140678 sshd[4117]: Connection closed by 147.75.109.163 port 33882
Jun 20 19:12:38.141544 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:38.146956 systemd[1]: sshd@12-10.128.0.9:22-147.75.109.163:33882.service: Deactivated successfully.
Jun 20 19:12:38.149780 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:12:38.150929 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:12:38.152434 systemd-logind[1458]: Removed session 13.
Jun 20 19:12:43.202889 systemd[1]: Started sshd@13-10.128.0.9:22-147.75.109.163:33894.service - OpenSSH per-connection server daemon (147.75.109.163:33894).
Jun 20 19:12:43.494132 sshd[4130]: Accepted publickey for core from 147.75.109.163 port 33894 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:43.495959 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:43.503537 systemd-logind[1458]: New session 14 of user core.
Jun 20 19:12:43.509490 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:12:43.788405 sshd[4132]: Connection closed by 147.75.109.163 port 33894
Jun 20 19:12:43.789655 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:43.793999 systemd[1]: sshd@13-10.128.0.9:22-147.75.109.163:33894.service: Deactivated successfully.
Jun 20 19:12:43.797086 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:12:43.799212 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:12:43.800959 systemd-logind[1458]: Removed session 14.
Jun 20 19:12:48.847654 systemd[1]: Started sshd@14-10.128.0.9:22-147.75.109.163:53210.service - OpenSSH per-connection server daemon (147.75.109.163:53210).
Jun 20 19:12:49.145832 sshd[4145]: Accepted publickey for core from 147.75.109.163 port 53210 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:49.147780 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:49.154440 systemd-logind[1458]: New session 15 of user core.
Jun 20 19:12:49.160521 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:12:49.445445 sshd[4147]: Connection closed by 147.75.109.163 port 53210
Jun 20 19:12:49.446713 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:49.451606 systemd[1]: sshd@14-10.128.0.9:22-147.75.109.163:53210.service: Deactivated successfully.
Jun 20 19:12:49.454646 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:12:49.456932 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:12:49.458618 systemd-logind[1458]: Removed session 15.
Jun 20 19:12:49.503958 systemd[1]: Started sshd@15-10.128.0.9:22-147.75.109.163:53224.service - OpenSSH per-connection server daemon (147.75.109.163:53224).
Jun 20 19:12:49.805358 sshd[4159]: Accepted publickey for core from 147.75.109.163 port 53224 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:49.807089 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:49.814111 systemd-logind[1458]: New session 16 of user core.
Jun 20 19:12:49.819497 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:12:50.161680 sshd[4161]: Connection closed by 147.75.109.163 port 53224
Jun 20 19:12:50.163064 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:50.167252 systemd[1]: sshd@15-10.128.0.9:22-147.75.109.163:53224.service: Deactivated successfully.
Jun 20 19:12:50.170052 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:12:50.172471 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:12:50.173958 systemd-logind[1458]: Removed session 16.
Jun 20 19:12:50.220684 systemd[1]: Started sshd@16-10.128.0.9:22-147.75.109.163:53236.service - OpenSSH per-connection server daemon (147.75.109.163:53236).
Jun 20 19:12:50.515079 sshd[4170]: Accepted publickey for core from 147.75.109.163 port 53236 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:50.516999 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:50.522991 systemd-logind[1458]: New session 17 of user core.
Jun 20 19:12:50.527510 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:12:51.430733 sshd[4172]: Connection closed by 147.75.109.163 port 53236
Jun 20 19:12:51.431769 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:51.437569 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:12:51.439627 systemd[1]: sshd@16-10.128.0.9:22-147.75.109.163:53236.service: Deactivated successfully.
Jun 20 19:12:51.443203 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:12:51.444839 systemd-logind[1458]: Removed session 17.
Jun 20 19:12:51.487107 systemd[1]: Started sshd@17-10.128.0.9:22-147.75.109.163:53240.service - OpenSSH per-connection server daemon (147.75.109.163:53240).
Jun 20 19:12:51.782962 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 53240 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:51.784820 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:51.790788 systemd-logind[1458]: New session 18 of user core.
Jun 20 19:12:51.796465 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:12:52.203744 sshd[4191]: Connection closed by 147.75.109.163 port 53240
Jun 20 19:12:52.205003 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:52.210460 systemd[1]: sshd@17-10.128.0.9:22-147.75.109.163:53240.service: Deactivated successfully.
Jun 20 19:12:52.213350 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:12:52.214606 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:12:52.216170 systemd-logind[1458]: Removed session 18.
Jun 20 19:12:52.261727 systemd[1]: Started sshd@18-10.128.0.9:22-147.75.109.163:53244.service - OpenSSH per-connection server daemon (147.75.109.163:53244).
Jun 20 19:12:52.557274 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 53244 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:52.559055 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:52.565050 systemd-logind[1458]: New session 19 of user core.
Jun 20 19:12:52.573474 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:12:52.842488 sshd[4203]: Connection closed by 147.75.109.163 port 53244
Jun 20 19:12:52.843805 sshd-session[4201]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:52.848437 systemd[1]: sshd@18-10.128.0.9:22-147.75.109.163:53244.service: Deactivated successfully.
Jun 20 19:12:52.851432 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:12:52.853769 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:12:52.855666 systemd-logind[1458]: Removed session 19.
Jun 20 19:12:57.901643 systemd[1]: Started sshd@19-10.128.0.9:22-147.75.109.163:36960.service - OpenSSH per-connection server daemon (147.75.109.163:36960).
Jun 20 19:12:58.200165 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 36960 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:12:58.202078 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:12:58.209213 systemd-logind[1458]: New session 20 of user core.
Jun 20 19:12:58.213600 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:12:58.489505 sshd[4223]: Connection closed by 147.75.109.163 port 36960
Jun 20 19:12:58.490390 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Jun 20 19:12:58.494931 systemd[1]: sshd@19-10.128.0.9:22-147.75.109.163:36960.service: Deactivated successfully.
Jun 20 19:12:58.497649 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:12:58.500052 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:12:58.502068 systemd-logind[1458]: Removed session 20.
Jun 20 19:13:03.556813 systemd[1]: Started sshd@20-10.128.0.9:22-147.75.109.163:36966.service - OpenSSH per-connection server daemon (147.75.109.163:36966).
Jun 20 19:13:03.851214 sshd[4235]: Accepted publickey for core from 147.75.109.163 port 36966 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:13:03.853125 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:03.860553 systemd-logind[1458]: New session 21 of user core.
Jun 20 19:13:03.864537 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:13:04.146657 sshd[4237]: Connection closed by 147.75.109.163 port 36966
Jun 20 19:13:04.147793 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:04.152236 systemd[1]: sshd@20-10.128.0.9:22-147.75.109.163:36966.service: Deactivated successfully.
Jun 20 19:13:04.155251 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:13:04.157433 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:13:04.159707 systemd-logind[1458]: Removed session 21.
Jun 20 19:13:09.207666 systemd[1]: Started sshd@21-10.128.0.9:22-147.75.109.163:38482.service - OpenSSH per-connection server daemon (147.75.109.163:38482).
Jun 20 19:13:09.500666 sshd[4249]: Accepted publickey for core from 147.75.109.163 port 38482 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:13:09.502542 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:09.508709 systemd-logind[1458]: New session 22 of user core.
Jun 20 19:13:09.515469 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:13:09.793892 sshd[4251]: Connection closed by 147.75.109.163 port 38482
Jun 20 19:13:09.795318 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:09.800760 systemd[1]: sshd@21-10.128.0.9:22-147.75.109.163:38482.service: Deactivated successfully.
Jun 20 19:13:09.804403 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:13:09.805734 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:13:09.807336 systemd-logind[1458]: Removed session 22.
Jun 20 19:13:09.853684 systemd[1]: Started sshd@22-10.128.0.9:22-147.75.109.163:38484.service - OpenSSH per-connection server daemon (147.75.109.163:38484).
Jun 20 19:13:10.145299 sshd[4262]: Accepted publickey for core from 147.75.109.163 port 38484 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI
Jun 20 19:13:10.146847 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:13:10.154078 systemd-logind[1458]: New session 23 of user core.
Jun 20 19:13:10.163531 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:13:12.707880 containerd[1472]: time="2025-06-20T19:13:12.707579430Z" level=info msg="StopContainer for \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\" with timeout 30 (s)"
Jun 20 19:13:12.709003 containerd[1472]: time="2025-06-20T19:13:12.708641014Z" level=info msg="Stop container \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\" with signal terminated"
Jun 20 19:13:12.736824 systemd[1]: cri-containerd-8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a.scope: Deactivated successfully.
Jun 20 19:13:12.738807 containerd[1472]: time="2025-06-20T19:13:12.738502737Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:13:12.756917 containerd[1472]: time="2025-06-20T19:13:12.756841056Z" level=info msg="StopContainer for \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\" with timeout 2 (s)"
Jun 20 19:13:12.757700 containerd[1472]: time="2025-06-20T19:13:12.757579433Z" level=info msg="Stop container \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\" with signal terminated"
Jun 20 19:13:12.769724 systemd-networkd[1381]: lxc_health: Link DOWN
Jun 20 19:13:12.769739 systemd-networkd[1381]: lxc_health: Lost carrier
Jun 20 19:13:12.792694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a-rootfs.mount: Deactivated successfully.
Jun 20 19:13:12.795829 systemd[1]: cri-containerd-bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2.scope: Deactivated successfully.
Jun 20 19:13:12.796632 systemd[1]: cri-containerd-bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2.scope: Consumed 9.628s CPU time, 125.9M memory peak, 136K read from disk, 13.3M written to disk.
Jun 20 19:13:12.823772 containerd[1472]: time="2025-06-20T19:13:12.823611592Z" level=info msg="shim disconnected" id=8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a namespace=k8s.io
Jun 20 19:13:12.823772 containerd[1472]: time="2025-06-20T19:13:12.823704270Z" level=warning msg="cleaning up after shim disconnected" id=8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a namespace=k8s.io
Jun 20 19:13:12.823772 containerd[1472]: time="2025-06-20T19:13:12.823720890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:13:12.852430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2-rootfs.mount: Deactivated successfully.
Jun 20 19:13:12.856197 containerd[1472]: time="2025-06-20T19:13:12.855964478Z" level=info msg="shim disconnected" id=bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2 namespace=k8s.io
Jun 20 19:13:12.856197 containerd[1472]: time="2025-06-20T19:13:12.856196638Z" level=warning msg="cleaning up after shim disconnected" id=bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2 namespace=k8s.io
Jun 20 19:13:12.856197 containerd[1472]: time="2025-06-20T19:13:12.856214898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:13:12.857991 containerd[1472]: time="2025-06-20T19:13:12.857907904Z" level=info msg="StopContainer for \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\" returns successfully"
Jun 20 19:13:12.859790 containerd[1472]: time="2025-06-20T19:13:12.859670151Z" level=info msg="StopPodSandbox for \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\""
Jun 20 19:13:12.860088 containerd[1472]: time="2025-06-20T19:13:12.859746109Z" level=info msg="Container to stop \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:13:12.867020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4-shm.mount: Deactivated successfully.
Jun 20 19:13:12.875634 systemd[1]: cri-containerd-93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4.scope: Deactivated successfully.
Jun 20 19:13:12.897621 containerd[1472]: time="2025-06-20T19:13:12.897120529Z" level=info msg="StopContainer for \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\" returns successfully"
Jun 20 19:13:12.900325 containerd[1472]: time="2025-06-20T19:13:12.898006264Z" level=info msg="StopPodSandbox for \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\""
Jun 20 19:13:12.900325 containerd[1472]: time="2025-06-20T19:13:12.898060593Z" level=info msg="Container to stop \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:13:12.900325 containerd[1472]: time="2025-06-20T19:13:12.898112960Z" level=info msg="Container to stop \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:13:12.900325 containerd[1472]: time="2025-06-20T19:13:12.898129563Z" level=info msg="Container to stop \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:13:12.900325 containerd[1472]: time="2025-06-20T19:13:12.898145182Z" level=info msg="Container to stop \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:13:12.900325 containerd[1472]: time="2025-06-20T19:13:12.898160538Z" level=info msg="Container to stop \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:13:12.904156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4-shm.mount: Deactivated successfully.
Jun 20 19:13:12.914975 systemd[1]: cri-containerd-e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4.scope: Deactivated successfully.
Jun 20 19:13:12.926723 containerd[1472]: time="2025-06-20T19:13:12.926436862Z" level=info msg="shim disconnected" id=93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4 namespace=k8s.io
Jun 20 19:13:12.926723 containerd[1472]: time="2025-06-20T19:13:12.926523374Z" level=warning msg="cleaning up after shim disconnected" id=93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4 namespace=k8s.io
Jun 20 19:13:12.926723 containerd[1472]: time="2025-06-20T19:13:12.926539119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:13:12.952376 containerd[1472]: time="2025-06-20T19:13:12.952293128Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:13:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jun 20 19:13:12.955080 containerd[1472]: time="2025-06-20T19:13:12.955027460Z" level=info msg="TearDown network for sandbox \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" successfully"
Jun 20 19:13:12.955386 containerd[1472]: time="2025-06-20T19:13:12.955231515Z" level=info msg="StopPodSandbox for \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" returns successfully"
Jun 20 19:13:12.964861 containerd[1472]: time="2025-06-20T19:13:12.964541097Z" level=info msg="shim disconnected" id=e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4 namespace=k8s.io
Jun 20 19:13:12.964861 containerd[1472]: time="2025-06-20T19:13:12.964618281Z" level=warning msg="cleaning up after shim disconnected" id=e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4 namespace=k8s.io
Jun 20 19:13:12.964861 containerd[1472]: time="2025-06-20T19:13:12.964632613Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:13:12.991932 containerd[1472]: time="2025-06-20T19:13:12.991882935Z" level=info msg="TearDown network for sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" successfully"
Jun 20 19:13:12.992088 containerd[1472]: time="2025-06-20T19:13:12.992013157Z" level=info msg="StopPodSandbox for \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" returns successfully"
Jun 20 19:13:13.041506 kubelet[2646]: I0620 19:13:13.041430 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-cilium-config-path\") pod \"8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef\" (UID: \"8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef\") "
Jun 20 19:13:13.042158 kubelet[2646]: I0620 19:13:13.041532 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwplh\" (UniqueName: \"kubernetes.io/projected/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-kube-api-access-jwplh\") pod \"8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef\" (UID: \"8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef\") "
Jun 20 19:13:13.046162 kubelet[2646]: I0620 19:13:13.046070 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef" (UID: "8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:13:13.046375 kubelet[2646]: I0620 19:13:13.046191 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-kube-api-access-jwplh" (OuterVolumeSpecName: "kube-api-access-jwplh") pod "8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef" (UID: "8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef"). InnerVolumeSpecName "kube-api-access-jwplh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:13:13.142852 kubelet[2646]: I0620 19:13:13.142783 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cni-path\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.142852 kubelet[2646]: I0620 19:13:13.142848 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-kernel\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143103 kubelet[2646]: I0620 19:13:13.142886 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-hubble-tls\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143103 kubelet[2646]: I0620 19:13:13.142913 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-run\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143103 kubelet[2646]: I0620 19:13:13.142933 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-cgroup\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143103 kubelet[2646]: I0620 19:13:13.142960 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-net\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143103 kubelet[2646]: I0620 19:13:13.142983 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-etc-cni-netd\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143103 kubelet[2646]: I0620 19:13:13.143012 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-config-path\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143442 kubelet[2646]: I0620 19:13:13.143038 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-lib-modules\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143442 kubelet[2646]: I0620 19:13:13.143062 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-bpf-maps\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143442 kubelet[2646]: I0620 19:13:13.143087 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-xtables-lock\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143442 kubelet[2646]: I0620 19:13:13.143115 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-hostproc\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143442 kubelet[2646]: I0620 19:13:13.143146 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d81aeab4-73ff-406b-b857-719d4be7d85e-clustermesh-secrets\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.143442 kubelet[2646]: I0620 19:13:13.143180 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nst2f\" (UniqueName: \"kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-kube-api-access-nst2f\") pod \"d81aeab4-73ff-406b-b857-719d4be7d85e\" (UID: \"d81aeab4-73ff-406b-b857-719d4be7d85e\") "
Jun 20 19:13:13.145437 kubelet[2646]: I0620 19:13:13.143242 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-cilium-config-path\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.145437 kubelet[2646]: I0620 19:13:13.143290 2646 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jwplh\" (UniqueName: \"kubernetes.io/projected/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef-kube-api-access-jwplh\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.145437 kubelet[2646]: I0620 19:13:13.143721 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.145437 kubelet[2646]: I0620 19:13:13.143789 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cni-path" (OuterVolumeSpecName: "cni-path") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.145437 kubelet[2646]: I0620 19:13:13.143818 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.148546 kubelet[2646]: I0620 19:13:13.148503 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:13:13.148776 kubelet[2646]: I0620 19:13:13.148746 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.149037 kubelet[2646]: I0620 19:13:13.149011 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.149173 kubelet[2646]: I0620 19:13:13.149151 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.149754 kubelet[2646]: I0620 19:13:13.149717 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-kube-api-access-nst2f" (OuterVolumeSpecName: "kube-api-access-nst2f") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "kube-api-access-nst2f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:13:13.149850 kubelet[2646]: I0620 19:13:13.149778 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.149850 kubelet[2646]: I0620 19:13:13.149806 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.149850 kubelet[2646]: I0620 19:13:13.149841 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.150078 kubelet[2646]: I0620 19:13:13.149868 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-hostproc" (OuterVolumeSpecName: "hostproc") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:13:13.151290 kubelet[2646]: I0620 19:13:13.151232 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:13:13.153223 kubelet[2646]: I0620 19:13:13.153188 2646 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81aeab4-73ff-406b-b857-719d4be7d85e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d81aeab4-73ff-406b-b857-719d4be7d85e" (UID: "d81aeab4-73ff-406b-b857-719d4be7d85e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 20 19:13:13.192352 kubelet[2646]: E0620 19:13:13.192297 2646 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:13:13.244290 kubelet[2646]: I0620 19:13:13.244203 2646 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-lib-modules\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244290 kubelet[2646]: I0620 19:13:13.244249 2646 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-bpf-maps\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244290 kubelet[2646]: I0620 19:13:13.244298 2646 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-xtables-lock\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244591 kubelet[2646]: I0620 19:13:13.244314 2646 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-hostproc\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244591 kubelet[2646]: I0620 19:13:13.244331 2646 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d81aeab4-73ff-406b-b857-719d4be7d85e-clustermesh-secrets\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244591 kubelet[2646]: I0620 19:13:13.244346 2646 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nst2f\" (UniqueName: \"kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-kube-api-access-nst2f\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244591 kubelet[2646]: I0620 19:13:13.244363 2646 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cni-path\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244591 kubelet[2646]: I0620 19:13:13.244381 2646 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-kernel\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244591 kubelet[2646]: I0620 19:13:13.244396 2646 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d81aeab4-73ff-406b-b857-719d4be7d85e-hubble-tls\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244591 kubelet[2646]: I0620 19:13:13.244410 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-run\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244824 kubelet[2646]: I0620 19:13:13.244425 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-cgroup\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244824 kubelet[2646]: I0620 19:13:13.244441 2646 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-host-proc-sys-net\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244824 kubelet[2646]: I0620 19:13:13.244456 2646 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d81aeab4-73ff-406b-b857-719d4be7d85e-etc-cni-netd\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.244824 kubelet[2646]: I0620 19:13:13.244470 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d81aeab4-73ff-406b-b857-719d4be7d85e-cilium-config-path\") on node \"ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal\" DevicePath \"\""
Jun 20 19:13:13.491987 kubelet[2646]: I0620 19:13:13.490507 2646 scope.go:117] "RemoveContainer" containerID="8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a"
Jun 20 19:13:13.493594 containerd[1472]: time="2025-06-20T19:13:13.493550510Z" level=info msg="RemoveContainer for \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\""
Jun 20 19:13:13.500409 systemd[1]: Removed slice kubepods-besteffort-pod8e56f8cd_6ce0_4a0a_8da1_cbcfbdc797ef.slice - libcontainer container kubepods-besteffort-pod8e56f8cd_6ce0_4a0a_8da1_cbcfbdc797ef.slice.
Jun 20 19:13:13.508072 containerd[1472]: time="2025-06-20T19:13:13.508028239Z" level=info msg="RemoveContainer for \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\" returns successfully"
Jun 20 19:13:13.513038 kubelet[2646]: I0620 19:13:13.512837 2646 scope.go:117] "RemoveContainer" containerID="8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a"
Jun 20 19:13:13.514156 systemd[1]: Removed slice kubepods-burstable-podd81aeab4_73ff_406b_b857_719d4be7d85e.slice - libcontainer container kubepods-burstable-podd81aeab4_73ff_406b_b857_719d4be7d85e.slice.
Jun 20 19:13:13.515743 systemd[1]: kubepods-burstable-podd81aeab4_73ff_406b_b857_719d4be7d85e.slice: Consumed 9.762s CPU time, 126.3M memory peak, 136K read from disk, 13.3M written to disk.
Jun 20 19:13:13.516644 containerd[1472]: time="2025-06-20T19:13:13.516465493Z" level=error msg="ContainerStatus for \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\": not found" Jun 20 19:13:13.518089 kubelet[2646]: E0620 19:13:13.517931 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\": not found" containerID="8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a" Jun 20 19:13:13.518089 kubelet[2646]: I0620 19:13:13.517973 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a"} err="failed to get container status \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b3e025ae1734f38482f6d1cfb1622ea83359ee13fee940cf934dacde265971a\": not found" Jun 20 19:13:13.518089 kubelet[2646]: I0620 19:13:13.518041 2646 scope.go:117] "RemoveContainer" containerID="bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2" Jun 20 19:13:13.519989 containerd[1472]: time="2025-06-20T19:13:13.519890341Z" level=info msg="RemoveContainer for \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\"" Jun 20 19:13:13.527866 containerd[1472]: time="2025-06-20T19:13:13.527735428Z" level=info msg="RemoveContainer for \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\" returns successfully" Jun 20 19:13:13.528306 kubelet[2646]: I0620 19:13:13.528229 2646 scope.go:117] "RemoveContainer" containerID="d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f" Jun 20 19:13:13.532041 
containerd[1472]: time="2025-06-20T19:13:13.531506789Z" level=info msg="RemoveContainer for \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\"" Jun 20 19:13:13.538290 containerd[1472]: time="2025-06-20T19:13:13.538218330Z" level=info msg="RemoveContainer for \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\" returns successfully" Jun 20 19:13:13.538719 kubelet[2646]: I0620 19:13:13.538691 2646 scope.go:117] "RemoveContainer" containerID="57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146" Jun 20 19:13:13.541022 containerd[1472]: time="2025-06-20T19:13:13.540992121Z" level=info msg="RemoveContainer for \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\"" Jun 20 19:13:13.548928 containerd[1472]: time="2025-06-20T19:13:13.548787077Z" level=info msg="RemoveContainer for \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\" returns successfully" Jun 20 19:13:13.549189 kubelet[2646]: I0620 19:13:13.549161 2646 scope.go:117] "RemoveContainer" containerID="efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b" Jun 20 19:13:13.555294 containerd[1472]: time="2025-06-20T19:13:13.554409737Z" level=info msg="RemoveContainer for \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\"" Jun 20 19:13:13.562102 containerd[1472]: time="2025-06-20T19:13:13.562052800Z" level=info msg="RemoveContainer for \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\" returns successfully" Jun 20 19:13:13.562339 kubelet[2646]: I0620 19:13:13.562306 2646 scope.go:117] "RemoveContainer" containerID="fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165" Jun 20 19:13:13.565416 containerd[1472]: time="2025-06-20T19:13:13.565358255Z" level=info msg="RemoveContainer for \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\"" Jun 20 19:13:13.573390 containerd[1472]: time="2025-06-20T19:13:13.573227404Z" level=info msg="RemoveContainer for 
\"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\" returns successfully" Jun 20 19:13:13.574416 kubelet[2646]: I0620 19:13:13.574323 2646 scope.go:117] "RemoveContainer" containerID="bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2" Jun 20 19:13:13.574832 containerd[1472]: time="2025-06-20T19:13:13.574786662Z" level=error msg="ContainerStatus for \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\": not found" Jun 20 19:13:13.577282 kubelet[2646]: E0620 19:13:13.575040 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\": not found" containerID="bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2" Jun 20 19:13:13.577282 kubelet[2646]: I0620 19:13:13.575083 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2"} err="failed to get container status \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb93e2335907b3341dc24d148cf0ec787b4af101913a8561dc188cdc34aa9bb2\": not found" Jun 20 19:13:13.577282 kubelet[2646]: I0620 19:13:13.575112 2646 scope.go:117] "RemoveContainer" containerID="d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f" Jun 20 19:13:13.577537 containerd[1472]: time="2025-06-20T19:13:13.575514648Z" level=error msg="ContainerStatus for \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\": not found" Jun 20 19:13:13.578009 kubelet[2646]: E0620 19:13:13.577803 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\": not found" containerID="d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f" Jun 20 19:13:13.578009 kubelet[2646]: I0620 19:13:13.577840 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f"} err="failed to get container status \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d38fcb6e893320ff5a4eead02cbd55385acf2c6c02b1594ccce63ef6ae590b1f\": not found" Jun 20 19:13:13.578009 kubelet[2646]: I0620 19:13:13.577867 2646 scope.go:117] "RemoveContainer" containerID="57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146" Jun 20 19:13:13.578636 containerd[1472]: time="2025-06-20T19:13:13.578347780Z" level=error msg="ContainerStatus for \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\": not found" Jun 20 19:13:13.579197 kubelet[2646]: E0620 19:13:13.578829 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\": not found" containerID="57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146" Jun 20 19:13:13.579197 kubelet[2646]: I0620 19:13:13.578868 2646 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146"} err="failed to get container status \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\": rpc error: code = NotFound desc = an error occurred when try to find container \"57f60f9ace00560b9078a31fccf88018a703d7de065326ebdcf5a1ea74b48146\": not found" Jun 20 19:13:13.579197 kubelet[2646]: I0620 19:13:13.578892 2646 scope.go:117] "RemoveContainer" containerID="efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b" Jun 20 19:13:13.579412 containerd[1472]: time="2025-06-20T19:13:13.579119790Z" level=error msg="ContainerStatus for \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\": not found" Jun 20 19:13:13.579769 kubelet[2646]: E0620 19:13:13.579575 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\": not found" containerID="efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b" Jun 20 19:13:13.579769 kubelet[2646]: I0620 19:13:13.579611 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b"} err="failed to get container status \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"efad8cf0d9e8707e79eefc5112a4443a90003bbc6cb9113afc1251b95288ab2b\": not found" Jun 20 19:13:13.579769 kubelet[2646]: I0620 19:13:13.579643 2646 scope.go:117] "RemoveContainer" containerID="fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165" Jun 20 19:13:13.580086 containerd[1472]: 
time="2025-06-20T19:13:13.579943570Z" level=error msg="ContainerStatus for \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\": not found" Jun 20 19:13:13.581473 kubelet[2646]: E0620 19:13:13.581405 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\": not found" containerID="fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165" Jun 20 19:13:13.581473 kubelet[2646]: I0620 19:13:13.581447 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165"} err="failed to get container status \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa856dd8b89842d2b6fa423a9fa70d1f06728b88383eb6fccc93656e5b517165\": not found" Jun 20 19:13:13.689628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4-rootfs.mount: Deactivated successfully. Jun 20 19:13:13.689791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4-rootfs.mount: Deactivated successfully. Jun 20 19:13:13.689932 systemd[1]: var-lib-kubelet-pods-8e56f8cd\x2d6ce0\x2d4a0a\x2d8da1\x2dcbcfbdc797ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwplh.mount: Deactivated successfully. Jun 20 19:13:13.690050 systemd[1]: var-lib-kubelet-pods-d81aeab4\x2d73ff\x2d406b\x2db857\x2d719d4be7d85e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnst2f.mount: Deactivated successfully. 
Jun 20 19:13:13.690156 systemd[1]: var-lib-kubelet-pods-d81aeab4\x2d73ff\x2d406b\x2db857\x2d719d4be7d85e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:13:13.690632 systemd[1]: var-lib-kubelet-pods-d81aeab4\x2d73ff\x2d406b\x2db857\x2d719d4be7d85e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:13:14.042945 kubelet[2646]: I0620 19:13:14.042872 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef" path="/var/lib/kubelet/pods/8e56f8cd-6ce0-4a0a-8da1-cbcfbdc797ef/volumes" Jun 20 19:13:14.043735 kubelet[2646]: I0620 19:13:14.043681 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81aeab4-73ff-406b-b857-719d4be7d85e" path="/var/lib/kubelet/pods/d81aeab4-73ff-406b-b857-719d4be7d85e/volumes" Jun 20 19:13:14.641948 sshd[4264]: Connection closed by 147.75.109.163 port 38484 Jun 20 19:13:14.642572 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:14.648853 systemd[1]: sshd@22-10.128.0.9:22-147.75.109.163:38484.service: Deactivated successfully. Jun 20 19:13:14.651900 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:13:14.652217 systemd[1]: session-23.scope: Consumed 1.748s CPU time, 23.8M memory peak. Jun 20 19:13:14.653374 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:13:14.655619 systemd-logind[1458]: Removed session 23. Jun 20 19:13:14.699681 systemd[1]: Started sshd@23-10.128.0.9:22-147.75.109.163:38490.service - OpenSSH per-connection server daemon (147.75.109.163:38490). 
Jun 20 19:13:14.996240 sshd[4425]: Accepted publickey for core from 147.75.109.163 port 38490 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:14.998194 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:15.005915 systemd-logind[1458]: New session 24 of user core. Jun 20 19:13:15.010757 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:13:15.629338 ntpd[1440]: Deleting interface #11 lxc_health, fe80::e805:53ff:fe28:1922%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs Jun 20 19:13:15.629828 ntpd[1440]: 20 Jun 19:13:15 ntpd[1440]: Deleting interface #11 lxc_health, fe80::e805:53ff:fe28:1922%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs Jun 20 19:13:16.148529 sshd[4427]: Connection closed by 147.75.109.163 port 38490 Jun 20 19:13:16.150335 systemd[1]: Created slice kubepods-burstable-podf06e6734_3edf_41db_8a3a_666e7827268f.slice - libcontainer container kubepods-burstable-podf06e6734_3edf_41db_8a3a_666e7827268f.slice. Jun 20 19:13:16.152380 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:16.167241 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:13:16.168131 systemd[1]: sshd@23-10.128.0.9:22-147.75.109.163:38490.service: Deactivated successfully. Jun 20 19:13:16.175384 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:13:16.180022 systemd-logind[1458]: Removed session 24. Jun 20 19:13:16.212680 systemd[1]: Started sshd@24-10.128.0.9:22-147.75.109.163:49968.service - OpenSSH per-connection server daemon (147.75.109.163:49968). 
Jun 20 19:13:16.260603 kubelet[2646]: I0620 19:13:16.260542 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-cilium-run\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.260603 kubelet[2646]: I0620 19:13:16.260590 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-xtables-lock\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261211 kubelet[2646]: I0620 19:13:16.260634 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f06e6734-3edf-41db-8a3a-666e7827268f-hubble-tls\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261211 kubelet[2646]: I0620 19:13:16.260662 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kd5t\" (UniqueName: \"kubernetes.io/projected/f06e6734-3edf-41db-8a3a-666e7827268f-kube-api-access-2kd5t\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261211 kubelet[2646]: I0620 19:13:16.260687 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-hostproc\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261211 kubelet[2646]: I0620 19:13:16.260723 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-cni-path\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261211 kubelet[2646]: I0620 19:13:16.260750 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f06e6734-3edf-41db-8a3a-666e7827268f-clustermesh-secrets\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261211 kubelet[2646]: I0620 19:13:16.260779 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-etc-cni-netd\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261471 kubelet[2646]: I0620 19:13:16.260807 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-bpf-maps\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261471 kubelet[2646]: I0620 19:13:16.260832 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f06e6734-3edf-41db-8a3a-666e7827268f-cilium-config-path\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261471 kubelet[2646]: I0620 19:13:16.260858 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/f06e6734-3edf-41db-8a3a-666e7827268f-cilium-ipsec-secrets\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261471 kubelet[2646]: I0620 19:13:16.260884 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-host-proc-sys-net\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261471 kubelet[2646]: I0620 19:13:16.260911 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-cilium-cgroup\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261619 kubelet[2646]: I0620 19:13:16.260939 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-host-proc-sys-kernel\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.261619 kubelet[2646]: I0620 19:13:16.260966 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f06e6734-3edf-41db-8a3a-666e7827268f-lib-modules\") pod \"cilium-q9cg2\" (UID: \"f06e6734-3edf-41db-8a3a-666e7827268f\") " pod="kube-system/cilium-q9cg2" Jun 20 19:13:16.461550 containerd[1472]: time="2025-06-20T19:13:16.460806642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9cg2,Uid:f06e6734-3edf-41db-8a3a-666e7827268f,Namespace:kube-system,Attempt:0,}" Jun 20 19:13:16.498197 containerd[1472]: 
time="2025-06-20T19:13:16.498056999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:13:16.498197 containerd[1472]: time="2025-06-20T19:13:16.498116847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:13:16.498197 containerd[1472]: time="2025-06-20T19:13:16.498135577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:13:16.498664 containerd[1472]: time="2025-06-20T19:13:16.498253753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:13:16.526004 sshd[4438]: Accepted publickey for core from 147.75.109.163 port 49968 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:16.528625 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:16.530761 systemd[1]: Started cri-containerd-467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d.scope - libcontainer container 467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d. Jun 20 19:13:16.540040 systemd-logind[1458]: New session 25 of user core. Jun 20 19:13:16.542008 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 20 19:13:16.573129 containerd[1472]: time="2025-06-20T19:13:16.572858497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9cg2,Uid:f06e6734-3edf-41db-8a3a-666e7827268f,Namespace:kube-system,Attempt:0,} returns sandbox id \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\"" Jun 20 19:13:16.581691 containerd[1472]: time="2025-06-20T19:13:16.581632959Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:13:16.595573 containerd[1472]: time="2025-06-20T19:13:16.595500687Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88\"" Jun 20 19:13:16.596317 containerd[1472]: time="2025-06-20T19:13:16.596116216Z" level=info msg="StartContainer for \"5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88\"" Jun 20 19:13:16.634474 systemd[1]: Started cri-containerd-5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88.scope - libcontainer container 5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88. Jun 20 19:13:16.686751 containerd[1472]: time="2025-06-20T19:13:16.686499873Z" level=info msg="StartContainer for \"5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88\" returns successfully" Jun 20 19:13:16.697564 systemd[1]: cri-containerd-5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88.scope: Deactivated successfully. Jun 20 19:13:16.737392 sshd[4477]: Connection closed by 147.75.109.163 port 49968 Jun 20 19:13:16.739654 sshd-session[4438]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:16.748358 systemd[1]: sshd@24-10.128.0.9:22-147.75.109.163:49968.service: Deactivated successfully. 
Jun 20 19:13:16.751952 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:13:16.753689 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit. Jun 20 19:13:16.755733 systemd-logind[1458]: Removed session 25. Jun 20 19:13:16.758129 containerd[1472]: time="2025-06-20T19:13:16.758059710Z" level=info msg="shim disconnected" id=5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88 namespace=k8s.io Jun 20 19:13:16.758317 containerd[1472]: time="2025-06-20T19:13:16.758207441Z" level=warning msg="cleaning up after shim disconnected" id=5015c05a9ab910f38e36b3411cab6255f464e320aa4119d11b80a3e88c6b5c88 namespace=k8s.io Jun 20 19:13:16.758405 containerd[1472]: time="2025-06-20T19:13:16.758230109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:16.792913 systemd[1]: Started sshd@25-10.128.0.9:22-147.75.109.163:49982.service - OpenSSH per-connection server daemon (147.75.109.163:49982). Jun 20 19:13:17.090500 sshd[4552]: Accepted publickey for core from 147.75.109.163 port 49982 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:17.092680 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:17.099371 systemd-logind[1458]: New session 26 of user core. Jun 20 19:13:17.105490 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 20 19:13:17.533972 containerd[1472]: time="2025-06-20T19:13:17.533920133Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:13:17.564939 containerd[1472]: time="2025-06-20T19:13:17.564868342Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce\"" Jun 20 19:13:17.566852 containerd[1472]: time="2025-06-20T19:13:17.565590652Z" level=info msg="StartContainer for \"edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce\"" Jun 20 19:13:17.619527 systemd[1]: Started cri-containerd-edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce.scope - libcontainer container edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce. Jun 20 19:13:17.659571 containerd[1472]: time="2025-06-20T19:13:17.659420720Z" level=info msg="StartContainer for \"edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce\" returns successfully" Jun 20 19:13:17.668569 systemd[1]: cri-containerd-edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce.scope: Deactivated successfully. 
Jun 20 19:13:17.701078 containerd[1472]: time="2025-06-20T19:13:17.700878386Z" level=info msg="shim disconnected" id=edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce namespace=k8s.io Jun 20 19:13:17.701078 containerd[1472]: time="2025-06-20T19:13:17.700952956Z" level=warning msg="cleaning up after shim disconnected" id=edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce namespace=k8s.io Jun 20 19:13:17.701078 containerd[1472]: time="2025-06-20T19:13:17.700967710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:17.977651 containerd[1472]: time="2025-06-20T19:13:17.977520068Z" level=info msg="StopPodSandbox for \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\"" Jun 20 19:13:17.978039 containerd[1472]: time="2025-06-20T19:13:17.977688680Z" level=info msg="TearDown network for sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" successfully" Jun 20 19:13:17.978039 containerd[1472]: time="2025-06-20T19:13:17.977709305Z" level=info msg="StopPodSandbox for \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" returns successfully" Jun 20 19:13:17.978310 containerd[1472]: time="2025-06-20T19:13:17.978209875Z" level=info msg="RemovePodSandbox for \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\"" Jun 20 19:13:17.978310 containerd[1472]: time="2025-06-20T19:13:17.978247423Z" level=info msg="Forcibly stopping sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\"" Jun 20 19:13:17.978430 containerd[1472]: time="2025-06-20T19:13:17.978358712Z" level=info msg="TearDown network for sandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" successfully" Jun 20 19:13:17.983416 containerd[1472]: time="2025-06-20T19:13:17.983208089Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\": an error occurred when try to 
find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 20 19:13:17.983416 containerd[1472]: time="2025-06-20T19:13:17.983343589Z" level=info msg="RemovePodSandbox \"e1db960ae60696b47d5e56a0942ae67667cf05eea1da1c7055b6ca78e19f2be4\" returns successfully" Jun 20 19:13:17.984167 containerd[1472]: time="2025-06-20T19:13:17.984134277Z" level=info msg="StopPodSandbox for \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\"" Jun 20 19:13:17.984311 containerd[1472]: time="2025-06-20T19:13:17.984254442Z" level=info msg="TearDown network for sandbox \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" successfully" Jun 20 19:13:17.984311 containerd[1472]: time="2025-06-20T19:13:17.984297747Z" level=info msg="StopPodSandbox for \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" returns successfully" Jun 20 19:13:17.984808 containerd[1472]: time="2025-06-20T19:13:17.984648664Z" level=info msg="RemovePodSandbox for \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\"" Jun 20 19:13:17.984808 containerd[1472]: time="2025-06-20T19:13:17.984688414Z" level=info msg="Forcibly stopping sandbox \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\"" Jun 20 19:13:17.984808 containerd[1472]: time="2025-06-20T19:13:17.984771120Z" level=info msg="TearDown network for sandbox \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" successfully" Jun 20 19:13:17.989074 containerd[1472]: time="2025-06-20T19:13:17.989033506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 20 19:13:17.989177 containerd[1472]: time="2025-06-20T19:13:17.989096719Z" level=info msg="RemovePodSandbox \"93a83fcf5f23bc14e299b82c6e49944b8ed37f8f1813da57590f567448865ce4\" returns successfully"
Jun 20 19:13:18.193919 kubelet[2646]: E0620 19:13:18.193842 2646 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:13:18.372182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edd6a95f12bbbe1cccd5dd6fcd69a48d33266c9abdc2d1aece7e2966921a47ce-rootfs.mount: Deactivated successfully.
Jun 20 19:13:18.534953 containerd[1472]: time="2025-06-20T19:13:18.534533452Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:13:18.562852 containerd[1472]: time="2025-06-20T19:13:18.562671950Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715\""
Jun 20 19:13:18.566324 containerd[1472]: time="2025-06-20T19:13:18.565539553Z" level=info msg="StartContainer for \"ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715\""
Jun 20 19:13:18.624484 systemd[1]: Started cri-containerd-ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715.scope - libcontainer container ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715.
Jun 20 19:13:18.670597 containerd[1472]: time="2025-06-20T19:13:18.670534720Z" level=info msg="StartContainer for \"ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715\" returns successfully"
Jun 20 19:13:18.672779 systemd[1]: cri-containerd-ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715.scope: Deactivated successfully.
Jun 20 19:13:18.708200 containerd[1472]: time="2025-06-20T19:13:18.708107963Z" level=info msg="shim disconnected" id=ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715 namespace=k8s.io
Jun 20 19:13:18.708200 containerd[1472]: time="2025-06-20T19:13:18.708185231Z" level=warning msg="cleaning up after shim disconnected" id=ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715 namespace=k8s.io
Jun 20 19:13:18.708200 containerd[1472]: time="2025-06-20T19:13:18.708201058Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:13:19.374739 systemd[1]: run-containerd-runc-k8s.io-ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715-runc.jvVkkD.mount: Deactivated successfully.
Jun 20 19:13:19.374914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea775c994f13af02c06ffd67a0e6e567d10a5e04fd080039eb4566784baab715-rootfs.mount: Deactivated successfully.
Jun 20 19:13:19.541082 containerd[1472]: time="2025-06-20T19:13:19.540887005Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:13:19.566357 containerd[1472]: time="2025-06-20T19:13:19.564081670Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8\""
Jun 20 19:13:19.567551 containerd[1472]: time="2025-06-20T19:13:19.567510633Z" level=info msg="StartContainer for \"5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8\""
Jun 20 19:13:19.618502 systemd[1]: Started cri-containerd-5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8.scope - libcontainer container 5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8.
Jun 20 19:13:19.652146 systemd[1]: cri-containerd-5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8.scope: Deactivated successfully.
Jun 20 19:13:19.653536 containerd[1472]: time="2025-06-20T19:13:19.652704913Z" level=info msg="StartContainer for \"5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8\" returns successfully"
Jun 20 19:13:19.686503 containerd[1472]: time="2025-06-20T19:13:19.686415192Z" level=info msg="shim disconnected" id=5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8 namespace=k8s.io
Jun 20 19:13:19.686503 containerd[1472]: time="2025-06-20T19:13:19.686501205Z" level=warning msg="cleaning up after shim disconnected" id=5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8 namespace=k8s.io
Jun 20 19:13:19.686943 containerd[1472]: time="2025-06-20T19:13:19.686514763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:13:20.372539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5639a87c2f23b79a180b0b476c471c5febdb7945c02cdfac8fa07ef17ed2b6d8-rootfs.mount: Deactivated successfully.
Jun 20 19:13:20.546134 containerd[1472]: time="2025-06-20T19:13:20.546064023Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:13:20.585224 containerd[1472]: time="2025-06-20T19:13:20.585160445Z" level=info msg="CreateContainer within sandbox \"467e99c30d698871e820ab3ef13fead9e745c2b5befc32abbf3fd3c19d865b7d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f6e8c897753dde651df6216495acaa84b7042ca459e93c7493cdd0bcbac8585f\""
Jun 20 19:13:20.586931 containerd[1472]: time="2025-06-20T19:13:20.586799901Z" level=info msg="StartContainer for \"f6e8c897753dde651df6216495acaa84b7042ca459e93c7493cdd0bcbac8585f\""
Jun 20 19:13:20.638484 systemd[1]: Started cri-containerd-f6e8c897753dde651df6216495acaa84b7042ca459e93c7493cdd0bcbac8585f.scope - libcontainer container f6e8c897753dde651df6216495acaa84b7042ca459e93c7493cdd0bcbac8585f.
Jun 20 19:13:20.679343 containerd[1472]: time="2025-06-20T19:13:20.678997091Z" level=info msg="StartContainer for \"f6e8c897753dde651df6216495acaa84b7042ca459e93c7493cdd0bcbac8585f\" returns successfully"
Jun 20 19:13:21.184363 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jun 20 19:13:21.397826 kubelet[2646]: I0620 19:13:21.396170 2646 setters.go:618] "Node became not ready" node="ci-4230-2-0-1ba50a4580a8ddc96621.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:13:21Z","lastTransitionTime":"2025-06-20T19:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 19:13:24.443502 systemd-networkd[1381]: lxc_health: Link UP
Jun 20 19:13:24.452900 systemd-networkd[1381]: lxc_health: Gained carrier
Jun 20 19:13:24.511296 kubelet[2646]: I0620 19:13:24.509636 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q9cg2" podStartSLOduration=8.509611673 podStartE2EDuration="8.509611673s" podCreationTimestamp="2025-06-20 19:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:13:21.568574685 +0000 UTC m=+123.797185220" watchObservedRunningTime="2025-06-20 19:13:24.509611673 +0000 UTC m=+126.738222209"
Jun 20 19:13:25.848584 systemd-networkd[1381]: lxc_health: Gained IPv6LL
Jun 20 19:13:28.629448 ntpd[1440]: Listen normally on 14 lxc_health [fe80::dc8f:d6ff:fe8a:e8fd%14]:123
Jun 20 19:13:28.630308 ntpd[1440]: 20 Jun 19:13:28 ntpd[1440]: Listen normally on 14 lxc_health [fe80::dc8f:d6ff:fe8a:e8fd%14]:123
Jun 20 19:13:30.502733 sshd[4554]: Connection closed by 147.75.109.163 port 49982
Jun 20 19:13:30.504211 sshd-session[4552]: pam_unix(sshd:session): session closed for user core
Jun 20 19:13:30.512627 systemd[1]: sshd@25-10.128.0.9:22-147.75.109.163:49982.service: Deactivated successfully.
Jun 20 19:13:30.515910 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:13:30.517173 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
Jun 20 19:13:30.518718 systemd-logind[1458]: Removed session 26.