Jun 20 19:10:42.146183 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025 Jun 20 19:10:42.146241 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:10:42.146261 kernel: BIOS-provided physical RAM map: Jun 20 19:10:42.146276 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jun 20 19:10:42.146291 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jun 20 19:10:42.146305 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jun 20 19:10:42.146323 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jun 20 19:10:42.146338 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jun 20 19:10:42.146358 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable Jun 20 19:10:42.146373 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data Jun 20 19:10:42.146388 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable Jun 20 19:10:42.146402 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jun 20 19:10:42.146416 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jun 20 19:10:42.146430 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jun 20 19:10:42.146452 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jun 20 19:10:42.146476 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jun 20 19:10:42.146513 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Jun 20 19:10:42.146530 kernel: NX (Execute Disable) protection: active Jun 20 19:10:42.146546 kernel: APIC: Static calls initialized Jun 20 19:10:42.146562 kernel: efi: EFI v2.7 by EDK II Jun 20 19:10:42.146579 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018 Jun 20 19:10:42.146595 kernel: random: crng init done Jun 20 19:10:42.146612 kernel: secureboot: Secure boot disabled Jun 20 19:10:42.146626 kernel: SMBIOS 2.4 present. Jun 20 19:10:42.146646 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Jun 20 19:10:42.146660 kernel: Hypervisor detected: KVM Jun 20 19:10:42.146676 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 19:10:42.146691 kernel: kvm-clock: using sched offset of 14282973276 cycles Jun 20 19:10:42.146705 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 19:10:42.146725 kernel: tsc: Detected 2299.998 MHz processor Jun 20 19:10:42.146746 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:10:42.146767 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:10:42.146788 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jun 20 19:10:42.146807 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jun 20 19:10:42.146827 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:10:42.146844 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jun 20 19:10:42.146860 kernel: Using GB pages for direct mapping Jun 20 19:10:42.146876 kernel: ACPI: Early table checksum verification disabled Jun 20 19:10:42.146893 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jun 20 19:10:42.146910 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jun 20 19:10:42.146934 kernel: ACPI: FACP 
0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jun 20 19:10:42.146955 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jun 20 19:10:42.146972 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jun 20 19:10:42.146989 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Jun 20 19:10:42.147007 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jun 20 19:10:42.147024 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jun 20 19:10:42.147041 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jun 20 19:10:42.147059 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jun 20 19:10:42.147080 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jun 20 19:10:42.147097 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jun 20 19:10:42.147114 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jun 20 19:10:42.147131 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jun 20 19:10:42.147148 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jun 20 19:10:42.147164 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jun 20 19:10:42.147182 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jun 20 19:10:42.147199 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jun 20 19:10:42.147225 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jun 20 19:10:42.147243 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jun 20 19:10:42.147260 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 20 19:10:42.147277 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 20 19:10:42.147294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x0009ffff] Jun 20 19:10:42.147311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jun 20 19:10:42.147329 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jun 20 19:10:42.147351 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jun 20 19:10:42.147369 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jun 20 19:10:42.147391 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jun 20 19:10:42.147408 kernel: Zone ranges: Jun 20 19:10:42.147425 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:10:42.147443 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 20 19:10:42.147460 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jun 20 19:10:42.147501 kernel: Movable zone start for each node Jun 20 19:10:42.147531 kernel: Early memory node ranges Jun 20 19:10:42.147548 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jun 20 19:10:42.147565 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jun 20 19:10:42.147582 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff] Jun 20 19:10:42.147604 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff] Jun 20 19:10:42.147622 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jun 20 19:10:42.147639 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jun 20 19:10:42.147656 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jun 20 19:10:42.147674 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:10:42.147691 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jun 20 19:10:42.147708 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jun 20 19:10:42.147725 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jun 20 19:10:42.147742 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jun 20 19:10:42.147763 kernel: On 
node 0, zone Normal: 32 pages in unavailable ranges Jun 20 19:10:42.147780 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 20 19:10:42.147798 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 19:10:42.147815 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:10:42.147832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 19:10:42.147850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:10:42.147867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 19:10:42.147884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 19:10:42.147901 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:10:42.147922 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 20 19:10:42.147940 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jun 20 19:10:42.147957 kernel: Booting paravirtualized kernel on KVM Jun 20 19:10:42.147974 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:10:42.147992 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:10:42.148009 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jun 20 19:10:42.148026 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jun 20 19:10:42.148043 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:10:42.148060 kernel: kvm-guest: PV spinlocks enabled Jun 20 19:10:42.148081 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 20 19:10:42.148101 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce 
verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:10:42.148119 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:10:42.148136 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 20 19:10:42.148153 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 19:10:42.148171 kernel: Fallback order for Node 0: 0 Jun 20 19:10:42.148188 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Jun 20 19:10:42.148205 kernel: Policy zone: Normal Jun 20 19:10:42.148226 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:10:42.148243 kernel: software IO TLB: area num 2. Jun 20 19:10:42.148261 kernel: Memory: 7511328K/7860552K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 348968K reserved, 0K cma-reserved) Jun 20 19:10:42.148278 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:10:42.148295 kernel: Kernel/User page tables isolation: enabled Jun 20 19:10:42.148313 kernel: ftrace: allocating 37938 entries in 149 pages Jun 20 19:10:42.148330 kernel: ftrace: allocated 149 pages with 4 groups Jun 20 19:10:42.148348 kernel: Dynamic Preempt: voluntary Jun 20 19:10:42.148382 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:10:42.148403 kernel: rcu: RCU event tracing is enabled. Jun 20 19:10:42.148421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:10:42.148440 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:10:42.148462 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:10:42.148502 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:10:42.148521 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 20 19:10:42.148540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:10:42.148558 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 19:10:42.148581 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:10:42.148598 kernel: Console: colour dummy device 80x25 Jun 20 19:10:42.148617 kernel: printk: console [ttyS0] enabled Jun 20 19:10:42.148635 kernel: ACPI: Core revision 20230628 Jun 20 19:10:42.148654 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:10:42.148671 kernel: x2apic enabled Jun 20 19:10:42.148690 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:10:42.148708 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jun 20 19:10:42.148726 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jun 20 19:10:42.148749 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jun 20 19:10:42.148767 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jun 20 19:10:42.148786 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jun 20 19:10:42.148804 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:10:42.148822 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jun 20 19:10:42.148841 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jun 20 19:10:42.148859 kernel: Spectre V2 : Mitigation: IBRS Jun 20 19:10:42.148877 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:10:42.148896 kernel: RETBleed: Mitigation: IBRS Jun 20 19:10:42.148927 kernel: Spectre V2 : User space: Vulnerable Jun 20 19:10:42.148945 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 19:10:42.148963 kernel: MDS: Mitigation: Clear CPU buffers Jun 20 19:10:42.148982 kernel: MMIO Stale Data: Vulnerable: 
Clear CPU buffers attempted, no microcode Jun 20 19:10:42.149000 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:10:42.149018 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:10:42.149037 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:10:42.149055 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:10:42.149077 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:10:42.149096 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 20 19:10:42.149114 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:10:42.149132 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:10:42.149151 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 19:10:42.149169 kernel: landlock: Up and running. Jun 20 19:10:42.149188 kernel: SELinux: Initializing. Jun 20 19:10:42.149206 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:10:42.149225 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:10:42.149248 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jun 20 19:10:42.149266 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:10:42.149284 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:10:42.149303 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:10:42.149322 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jun 20 19:10:42.149346 kernel: signal: max sigframe size: 1776 Jun 20 19:10:42.149364 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:10:42.149383 kernel: rcu: Max phase no-delay instances is 400. 
Jun 20 19:10:42.149401 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:10:42.149423 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:10:42.149441 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:10:42.149459 kernel: .... node #0, CPUs: #1 Jun 20 19:10:42.149496 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 20 19:10:42.149525 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 20 19:10:42.149544 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:10:42.149562 kernel: smpboot: Max logical packages: 1 Jun 20 19:10:42.149581 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jun 20 19:10:42.149599 kernel: devtmpfs: initialized Jun 20 19:10:42.149622 kernel: x86/mm: Memory block size: 128MB Jun 20 19:10:42.149640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jun 20 19:10:42.149658 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:10:42.149676 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:10:42.149695 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:10:42.149713 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:10:42.149731 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:10:42.149749 kernel: audit: type=2000 audit(1750446640.413:1): state=initialized audit_enabled=0 res=1 Jun 20 19:10:42.149767 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:10:42.149788 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:10:42.149807 kernel: cpuidle: using governor menu Jun 20 19:10:42.149826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 
19:10:42.149844 kernel: dca service started, version 1.12.1 Jun 20 19:10:42.149862 kernel: PCI: Using configuration type 1 for base access Jun 20 19:10:42.149881 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 20 19:10:42.149899 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:10:42.149917 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:10:42.149940 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:10:42.149958 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:10:42.149976 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:10:42.149995 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:10:42.150013 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:10:42.150032 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 20 19:10:42.150050 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 20 19:10:42.150068 kernel: ACPI: Interpreter enabled Jun 20 19:10:42.150086 kernel: ACPI: PM: (supports S0 S3 S5) Jun 20 19:10:42.150104 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:10:42.150127 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:10:42.150146 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 20 19:10:42.150164 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 20 19:10:42.150183 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 19:10:42.150526 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:10:42.150738 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 20 19:10:42.150925 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 20 19:10:42.150954 kernel: PCI host bridge to bus 0000:00 Jun 20 
19:10:42.151135 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:10:42.151305 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 19:10:42.151474 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 19:10:42.151658 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jun 20 19:10:42.151822 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:10:42.152030 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 20 19:10:42.152240 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jun 20 19:10:42.152455 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 20 19:10:42.152686 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 20 19:10:42.152900 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jun 20 19:10:42.153109 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jun 20 19:10:42.153305 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jun 20 19:10:42.153567 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jun 20 19:10:42.153776 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jun 20 19:10:42.153972 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jun 20 19:10:42.154173 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jun 20 19:10:42.154358 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jun 20 19:10:42.154570 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jun 20 19:10:42.154593 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 19:10:42.154619 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 19:10:42.154637 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:10:42.154656 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 19:10:42.154674 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 20 19:10:42.154692 
kernel: iommu: Default domain type: Translated Jun 20 19:10:42.154710 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:10:42.154729 kernel: efivars: Registered efivars operations Jun 20 19:10:42.154747 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:10:42.154765 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:10:42.154787 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jun 20 19:10:42.154805 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jun 20 19:10:42.154822 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff] Jun 20 19:10:42.154840 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jun 20 19:10:42.154857 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jun 20 19:10:42.154875 kernel: vgaarb: loaded Jun 20 19:10:42.154892 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 19:10:42.154909 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:10:42.154929 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:10:42.154951 kernel: pnp: PnP ACPI init Jun 20 19:10:42.154969 kernel: pnp: PnP ACPI: found 7 devices Jun 20 19:10:42.154989 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:10:42.155005 kernel: NET: Registered PF_INET protocol family Jun 20 19:10:42.155022 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:10:42.155041 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 20 19:10:42.155057 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:10:42.155073 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 19:10:42.155090 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 20 19:10:42.155112 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 20 19:10:42.155130 kernel: UDP hash table 
entries: 4096 (order: 5, 131072 bytes, linear) Jun 20 19:10:42.155147 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 20 19:10:42.155166 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:10:42.155183 kernel: NET: Registered PF_XDP protocol family Jun 20 19:10:42.155385 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 19:10:42.155604 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 19:10:42.155849 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 19:10:42.156026 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jun 20 19:10:42.156221 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 19:10:42.156247 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:10:42.156266 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 20 19:10:42.156286 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jun 20 19:10:42.156306 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 19:10:42.156326 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jun 20 19:10:42.156350 kernel: clocksource: Switched to clocksource tsc Jun 20 19:10:42.156369 kernel: Initialise system trusted keyrings Jun 20 19:10:42.156387 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 20 19:10:42.156406 kernel: Key type asymmetric registered Jun 20 19:10:42.156425 kernel: Asymmetric key parser 'x509' registered Jun 20 19:10:42.156444 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 20 19:10:42.156470 kernel: io scheduler mq-deadline registered Jun 20 19:10:42.156565 kernel: io scheduler kyber registered Jun 20 19:10:42.156585 kernel: io scheduler bfq registered Jun 20 19:10:42.156604 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:10:42.156629 kernel: ACPI: 
\_SB_.LNKC: Enabled at IRQ 11 Jun 20 19:10:42.156851 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jun 20 19:10:42.156878 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 20 19:10:42.157070 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jun 20 19:10:42.157094 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 20 19:10:42.157278 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jun 20 19:10:42.157303 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:10:42.157322 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:10:42.157348 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 20 19:10:42.157367 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jun 20 19:10:42.157385 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jun 20 19:10:42.157612 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jun 20 19:10:42.157640 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 19:10:42.157660 kernel: i8042: Warning: Keylock active Jun 20 19:10:42.157679 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:10:42.157699 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:10:42.157891 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 20 19:10:42.158068 kernel: rtc_cmos 00:00: registered as rtc0 Jun 20 19:10:42.158254 kernel: rtc_cmos 00:00: setting system clock to 2025-06-20T19:10:41 UTC (1750446641) Jun 20 19:10:42.158427 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 20 19:10:42.158450 kernel: intel_pstate: CPU model not supported Jun 20 19:10:42.158480 kernel: pstore: Using crash dump compression: deflate Jun 20 19:10:42.158543 kernel: pstore: Registered efi_pstore as persistent store backend Jun 20 19:10:42.158563 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:10:42.158587 kernel: 
Segment Routing with IPv6 Jun 20 19:10:42.158606 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:10:42.158625 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:10:42.158644 kernel: Key type dns_resolver registered Jun 20 19:10:42.158662 kernel: IPI shorthand broadcast: enabled Jun 20 19:10:42.158682 kernel: sched_clock: Marking stable (938004540, 169839368)->(1184419946, -76576038) Jun 20 19:10:42.158701 kernel: registered taskstats version 1 Jun 20 19:10:42.158719 kernel: Loading compiled-in X.509 certificates Jun 20 19:10:42.158738 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497' Jun 20 19:10:42.158760 kernel: Key type .fscrypt registered Jun 20 19:10:42.158779 kernel: Key type fscrypt-provisioning registered Jun 20 19:10:42.158798 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:10:42.158816 kernel: ima: No architecture policies found Jun 20 19:10:42.158835 kernel: clk: Disabling unused clocks Jun 20 19:10:42.158854 kernel: Freeing unused kernel image (initmem) memory: 43488K Jun 20 19:10:42.158873 kernel: Write protecting the kernel read-only data: 38912k Jun 20 19:10:42.158891 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jun 20 19:10:42.158910 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jun 20 19:10:42.158933 kernel: Run /init as init process Jun 20 19:10:42.158951 kernel: with arguments: Jun 20 19:10:42.158970 kernel: /init Jun 20 19:10:42.158988 kernel: with environment: Jun 20 19:10:42.159006 kernel: HOME=/ Jun 20 19:10:42.159025 kernel: TERM=linux Jun 20 19:10:42.159043 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:10:42.159064 systemd[1]: Successfully made /usr/ read-only. 
Jun 20 19:10:42.159092 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:10:42.159113 systemd[1]: Detected virtualization google. Jun 20 19:10:42.159130 systemd[1]: Detected architecture x86-64. Jun 20 19:10:42.159145 systemd[1]: Running in initrd. Jun 20 19:10:42.159160 systemd[1]: No hostname configured, using default hostname. Jun 20 19:10:42.159178 systemd[1]: Hostname set to . Jun 20 19:10:42.159194 systemd[1]: Initializing machine ID from random generator. Jun 20 19:10:42.159217 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:10:42.159235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:10:42.159253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:10:42.159271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:10:42.159291 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:10:42.159317 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:10:42.159335 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:10:42.159360 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:10:42.159378 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jun 20 19:10:42.159415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:10:42.159438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:10:42.159458 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:10:42.159526 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:10:42.159561 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:10:42.159580 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:10:42.159599 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:10:42.159617 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:10:42.159635 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:10:42.159654 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:10:42.159672 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:10:42.159691 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:10:42.159710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:10:42.159733 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:10:42.159752 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:10:42.159769 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:10:42.159787 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:10:42.159804 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:10:42.159823 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:10:42.159842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:10:42.159882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 19:10:42.159988 systemd-journald[184]: Collecting audit messages is disabled.
Jun 20 19:10:42.160038 systemd-journald[184]: Journal started
Jun 20 19:10:42.160077 systemd-journald[184]: Runtime Journal (/run/log/journal/0e77551818c74526bdeb1881d4a03cbf) is 8M, max 148.6M, 140.6M free.
Jun 20 19:10:42.160390 systemd-modules-load[185]: Inserted module 'overlay'
Jun 20 19:10:42.180732 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:10:42.189692 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:10:42.234723 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:10:42.234780 kernel: Bridge firewalling registered
Jun 20 19:10:42.199164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:10:42.222153 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jun 20 19:10:42.246179 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:10:42.266113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:10:42.274147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:42.295919 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:10:42.323718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:10:42.354746 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:10:42.371851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:10:42.373444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:42.386602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:10:42.394723 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:10:42.395560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:10:42.403836 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:10:42.408981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:42.421244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:10:42.455382 systemd-resolved[215]: Positive Trust Anchors:
Jun 20 19:10:42.455406 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:10:42.455477 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:10:42.460469 systemd-resolved[215]: Defaulting to hostname 'linux'.
Jun 20 19:10:42.463790 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:10:42.475889 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:10:42.587668 dracut-cmdline[220]: dracut-dracut-053
Jun 20 19:10:42.587668 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 19:10:42.486000 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:10:42.672538 kernel: SCSI subsystem initialized
Jun 20 19:10:42.689536 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:10:42.705525 kernel: iscsi: registered transport (tcp)
Jun 20 19:10:42.738563 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:10:42.738657 kernel: QLogic iSCSI HBA Driver
Jun 20 19:10:42.791975 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:10:42.798720 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:10:42.877259 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:10:42.877354 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:10:42.877410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 20 19:10:42.935550 kernel: raid6: avx2x4 gen() 18010 MB/s
Jun 20 19:10:42.956579 kernel: raid6: avx2x2 gen() 17459 MB/s
Jun 20 19:10:42.982608 kernel: raid6: avx2x1 gen() 13592 MB/s
Jun 20 19:10:42.982700 kernel: raid6: using algorithm avx2x4 gen() 18010 MB/s
Jun 20 19:10:43.009635 kernel: raid6: .... xor() 7587 MB/s, rmw enabled
Jun 20 19:10:43.009713 kernel: raid6: using avx2x2 recovery algorithm
Jun 20 19:10:43.039529 kernel: xor: automatically using best checksumming function avx
Jun 20 19:10:43.213533 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:10:43.227059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:10:43.241773 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:10:43.291697 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jun 20 19:10:43.299831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:10:43.331728 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:10:43.373828 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jun 20 19:10:43.410981 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:10:43.435744 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:10:43.549263 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:10:43.568749 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:10:43.620368 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:10:43.622747 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:10:43.639086 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:10:43.679519 kernel: scsi host0: Virtio SCSI HBA
Jun 20 19:10:43.726716 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jun 20 19:10:43.760695 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:10:43.787663 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:10:43.785499 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:10:43.810519 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 20 19:10:43.818428 kernel: AES CTR mode by8 optimization enabled
Jun 20 19:10:43.845663 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:10:43.847693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:43.884814 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jun 20 19:10:43.885318 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jun 20 19:10:43.885617 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jun 20 19:10:43.902449 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jun 20 19:10:43.902917 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 19:10:43.919744 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:10:43.919831 kernel: GPT:17805311 != 25165823
Jun 20 19:10:43.919869 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:10:43.925679 kernel: GPT:17805311 != 25165823
Jun 20 19:10:43.925723 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:10:43.925747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:10:43.939873 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jun 20 19:10:43.944918 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:10:43.970654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:10:43.971409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:44.023751 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (444)
Jun 20 19:10:44.023798 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (445)
Jun 20 19:10:43.982108 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:10:44.047874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:10:44.058331 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:10:44.058835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:10:44.111774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:44.148924 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jun 20 19:10:44.179543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jun 20 19:10:44.192261 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jun 20 19:10:44.220986 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jun 20 19:10:44.233690 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jun 20 19:10:44.265738 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:10:44.285283 disk-uuid[540]: Primary Header is updated.
Jun 20 19:10:44.285283 disk-uuid[540]: Secondary Entries is updated.
Jun 20 19:10:44.285283 disk-uuid[540]: Secondary Header is updated.
Jun 20 19:10:44.332171 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:10:44.298851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:10:44.380322 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:45.346523 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:10:45.346634 disk-uuid[541]: The operation has completed successfully.
Jun 20 19:10:45.458619 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:10:45.458808 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:10:45.501814 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:10:45.537160 sh[564]: Success
Jun 20 19:10:45.561548 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 20 19:10:45.667645 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:10:45.677030 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:10:45.699407 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:10:45.757923 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91
Jun 20 19:10:45.758061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:45.758091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 20 19:10:45.774535 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 20 19:10:45.774638 kernel: BTRFS info (device dm-0): using free space tree
Jun 20 19:10:45.812605 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jun 20 19:10:45.820009 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:10:45.829810 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:10:45.834817 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:10:45.851898 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:10:45.925398 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:45.925534 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:45.925565 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:10:45.944196 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:10:45.944298 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:10:45.959661 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:45.973747 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:10:46.001826 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:10:46.022075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:10:46.058835 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:10:46.153711 systemd-networkd[743]: lo: Link UP
Jun 20 19:10:46.153725 systemd-networkd[743]: lo: Gained carrier
Jun 20 19:10:46.158144 systemd-networkd[743]: Enumeration completed
Jun 20 19:10:46.158711 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:10:46.158719 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:10:46.214502 ignition[723]: Ignition 2.20.0
Jun 20 19:10:46.160139 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:10:46.214525 ignition[723]: Stage: fetch-offline
Jun 20 19:10:46.161167 systemd-networkd[743]: eth0: Link UP
Jun 20 19:10:46.214588 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:46.161175 systemd-networkd[743]: eth0: Gained carrier
Jun 20 19:10:46.214605 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:46.161189 systemd-networkd[743]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:10:46.214777 ignition[723]: parsed url from cmdline: ""
Jun 20 19:10:46.173011 systemd[1]: Reached target network.target - Network.
Jun 20 19:10:46.214785 ignition[723]: no config URL provided
Jun 20 19:10:46.180599 systemd-networkd[743]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jun 20 19:10:46.214794 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:10:46.217187 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:10:46.214809 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:10:46.245740 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 19:10:46.214820 ignition[723]: failed to fetch config: resource requires networking
Jun 20 19:10:46.284747 unknown[753]: fetched base config from "system"
Jun 20 19:10:46.215165 ignition[723]: Ignition finished successfully
Jun 20 19:10:46.284761 unknown[753]: fetched base config from "system"
Jun 20 19:10:46.269536 ignition[753]: Ignition 2.20.0
Jun 20 19:10:46.284773 unknown[753]: fetched user config from "gcp"
Jun 20 19:10:46.269546 ignition[753]: Stage: fetch
Jun 20 19:10:46.287784 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 19:10:46.269754 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:46.307860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:10:46.269767 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:46.353758 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:10:46.269888 ignition[753]: parsed url from cmdline: ""
Jun 20 19:10:46.373774 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:10:46.269895 ignition[753]: no config URL provided
Jun 20 19:10:46.414103 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:10:46.269905 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:10:46.435165 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:10:46.269918 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:10:46.453726 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:10:46.269957 ignition[753]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jun 20 19:10:46.472712 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:10:46.275997 ignition[753]: GET result: OK
Jun 20 19:10:46.489733 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:10:46.276066 ignition[753]: parsing config with SHA512: 4daf925b5b4a8a77d3057c42e2c8da9b9ff2e544f742493fe81228c358227577fde96205936b216e37795988f134df5434362f4aa13c9e272f46e155b9b04638
Jun 20 19:10:46.504724 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:10:46.285716 ignition[753]: fetch: fetch complete
Jun 20 19:10:46.526009 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:10:46.285833 ignition[753]: fetch: fetch passed
Jun 20 19:10:46.285935 ignition[753]: Ignition finished successfully
Jun 20 19:10:46.350847 ignition[759]: Ignition 2.20.0
Jun 20 19:10:46.350856 ignition[759]: Stage: kargs
Jun 20 19:10:46.351092 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:46.351106 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:46.352396 ignition[759]: kargs: kargs passed
Jun 20 19:10:46.352462 ignition[759]: Ignition finished successfully
Jun 20 19:10:46.393101 ignition[764]: Ignition 2.20.0
Jun 20 19:10:46.393111 ignition[764]: Stage: disks
Jun 20 19:10:46.393343 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:46.393356 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:46.394382 ignition[764]: disks: disks passed
Jun 20 19:10:46.394439 ignition[764]: Ignition finished successfully
Jun 20 19:10:46.576543 systemd-fsck[774]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jun 20 19:10:46.777671 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:10:46.782773 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:10:46.940529 kernel: EXT4-fs (sda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none.
Jun 20 19:10:46.941440 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:10:46.942358 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:10:46.961679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:10:46.999670 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:10:47.047693 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (783)
Jun 20 19:10:47.047758 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:47.047783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:47.047807 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:10:47.016877 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:10:47.075709 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:10:47.075758 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:10:47.016974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:10:47.017023 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:10:47.089069 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:10:47.114188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:10:47.146775 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:10:47.278809 initrd-setup-root[807]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:10:47.289740 initrd-setup-root[814]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:10:47.300685 initrd-setup-root[821]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:10:47.311672 initrd-setup-root[828]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:10:47.470995 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:10:47.501725 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:10:47.531816 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:47.524975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:10:47.552931 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:10:47.592128 ignition[895]: INFO : Ignition 2.20.0
Jun 20 19:10:47.599713 ignition[895]: INFO : Stage: mount
Jun 20 19:10:47.599713 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:47.599713 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:47.599713 ignition[895]: INFO : mount: mount passed
Jun 20 19:10:47.599713 ignition[895]: INFO : Ignition finished successfully
Jun 20 19:10:47.594290 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:10:47.600261 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:10:47.620699 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:10:47.668804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:10:47.725538 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (907)
Jun 20 19:10:47.744476 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 19:10:47.744592 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:10:47.744620 kernel: BTRFS info (device sda6): using free space tree
Jun 20 19:10:47.768697 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jun 20 19:10:47.768794 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 19:10:47.772790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:10:47.811408 ignition[924]: INFO : Ignition 2.20.0
Jun 20 19:10:47.811408 ignition[924]: INFO : Stage: files
Jun 20 19:10:47.827672 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:47.827672 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:47.827672 ignition[924]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:10:47.827672 ignition[924]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:10:47.827672 ignition[924]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:10:47.884696 ignition[924]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:10:47.884696 ignition[924]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:10:47.884696 ignition[924]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:10:47.884696 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:10:47.884696 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jun 20 19:10:47.831153 unknown[924]: wrote ssh authorized keys file for user: core
Jun 20 19:10:47.954713 systemd-networkd[743]: eth0: Gained IPv6LL
Jun 20 19:10:47.969784 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:10:48.832869 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:10:48.850726 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:10:48.850726 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 19:10:49.171177 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 19:10:49.309219 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:10:49.324681 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 20 19:10:49.759272 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 19:10:50.094646 ignition[924]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:10:50.094646 ignition[924]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:10:50.133754 ignition[924]: INFO : files: files passed
Jun 20 19:10:50.133754 ignition[924]: INFO : Ignition finished successfully
Jun 20 19:10:50.100096 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:10:50.119908 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:10:50.150668 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:10:50.200182 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:10:50.354697 initrd-setup-root-after-ignition[952]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:10:50.354697 initrd-setup-root-after-ignition[952]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:10:50.200326 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:10:50.422744 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:10:50.225155 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:10:50.249137 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:10:50.280776 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:10:50.354445 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:10:50.354643 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:10:50.367026 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:10:50.390736 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:10:50.412894 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:10:50.419826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:10:50.473336 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:10:50.500832 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:10:50.545075 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:10:50.547106 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:10:50.581047 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:10:50.593084 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:10:50.593301 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:10:50.634123 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:10:50.644112 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:10:50.670003 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:10:50.679191 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:10:50.716032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:10:50.743074 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:10:50.772088 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:10:50.800054 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:10:50.812139 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:10:50.829089 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:10:50.846028 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:10:50.846239 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:10:50.879092 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:10:50.889080 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:10:50.928813 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:10:50.929193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:10:50.938011 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:10:50.938216 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:10:50.987771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:10:50.988220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:10:50.998201 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:10:50.998426 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:10:51.043776 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:10:51.064517 ignition[977]: INFO : Ignition 2.20.0
Jun 20 19:10:51.064517 ignition[977]: INFO : Stage: umount
Jun 20 19:10:51.091721 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:10:51.091721 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jun 20 19:10:51.091721 ignition[977]: INFO : umount: umount passed
Jun 20 19:10:51.091721 ignition[977]: INFO : Ignition finished successfully
Jun 20 19:10:51.065669 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:10:51.065952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:10:51.085053 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:10:51.102060 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:10:51.102321 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:10:51.120146 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:10:51.120389 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:10:51.157598 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:10:51.159009 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:10:51.159135 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:10:51.174370 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:10:51.174569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:10:51.198411 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:10:51.198603 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:10:51.218245 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:10:51.218317 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:10:51.226999 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:10:51.227080 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:10:51.244990 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:10:51.245070 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:10:51.261960 systemd[1]: Stopped target network.target - Network.
Jun 20 19:10:51.278901 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:10:51.278996 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:10:51.294990 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:10:51.313902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:10:51.317673 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:10:51.340822 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:10:51.349910 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:10:51.365951 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:10:51.366017 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:10:51.400889 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:10:51.400960 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:10:51.419864 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:10:51.419957 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:10:51.438897 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:10:51.438978 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:10:51.447947 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:10:51.448028 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:10:51.465140 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:10:51.491879 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:10:51.512235 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:10:51.512391 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:10:51.533206 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:10:51.533580 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:10:51.533753 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:10:51.541373 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:10:51.543053 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:10:51.543137 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:10:51.576655 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:10:51.579849 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:10:51.579939 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:10:51.628811 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:10:51.628896 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:51.639060 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:10:51.639148 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:10:52.083752 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:10:51.656819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:10:51.656917 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:10:51.676051 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:10:51.697101 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:10:51.697215 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:10:51.697873 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:10:51.698036 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:10:51.723851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:10:51.723927 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:10:51.745794 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:10:51.745868 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:10:51.764768 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:10:51.764965 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:10:51.794710 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:10:51.794843 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:10:51.824754 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:10:51.824896 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:10:51.861776 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:10:51.884667 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:10:51.884831 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:10:51.906965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:10:51.907045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:10:51.928114 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:10:51.928211 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:10:51.928879 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:10:51.929003 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:10:51.947086 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:10:51.947217 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:10:51.971031 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:10:51.992771 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:10:52.029560 systemd[1]: Switching root.
Jun 20 19:10:52.411689 systemd-journald[184]: Journal stopped
Jun 20 19:10:55.246174 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:10:55.246232 kernel: SELinux: policy capability open_perms=1
Jun 20 19:10:55.246247 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:10:55.246259 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:10:55.246270 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:10:55.246281 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:10:55.246300 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:10:55.246311 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:10:55.246330 kernel: audit: type=1403 audit(1750446652.800:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:10:55.246350 systemd[1]: Successfully loaded SELinux policy in 95.241ms.
Jun 20 19:10:55.246367 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.350ms.
Jun 20 19:10:55.246381 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:10:55.246394 systemd[1]: Detected virtualization google.
Jun 20 19:10:55.246406 systemd[1]: Detected architecture x86-64.
Jun 20 19:10:55.246423 systemd[1]: Detected first boot.
Jun 20 19:10:55.246436 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:10:55.246450 zram_generator::config[1020]: No configuration found.
Jun 20 19:10:55.246464 kernel: Guest personality initialized and is inactive
Jun 20 19:10:55.246481 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:10:55.246631 kernel: Initialized host personality
Jun 20 19:10:55.246656 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:10:55.246672 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:10:55.246688 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:10:55.246702 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:10:55.246716 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:10:55.246729 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:10:55.246743 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:10:55.246756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:10:55.246774 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:10:55.246788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:10:55.246802 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:10:55.246819 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:10:55.246833 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:10:55.246846 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:10:55.246860 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:10:55.246878 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:10:55.246893 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:10:55.246909 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:10:55.246925 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:10:55.246939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:10:55.246958 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:10:55.246972 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:10:55.246986 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:10:55.247003 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:10:55.247019 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:10:55.247033 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:10:55.247047 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:10:55.247060 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:10:55.247075 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:10:55.247088 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:10:55.247102 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:10:55.247119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:10:55.247134 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:10:55.247148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:10:55.247164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:10:55.247182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:10:55.247197 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:10:55.247211 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:10:55.247226 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:10:55.247240 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:10:55.247254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:10:55.247305 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:10:55.247320 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:10:55.247339 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:10:55.247354 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:10:55.247368 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:10:55.247382 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:10:55.247399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:10:55.247426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:10:55.247449 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:10:55.247472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:10:55.247521 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:10:55.247554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:10:55.247573 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:10:55.247587 kernel: fuse: init (API version 7.39)
Jun 20 19:10:55.247601 kernel: ACPI: bus type drm_connector registered
Jun 20 19:10:55.247615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:10:55.247630 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:10:55.247644 kernel: loop: module loaded
Jun 20 19:10:55.247661 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:10:55.247675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:10:55.247689 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:10:55.247703 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:10:55.247718 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:10:55.247732 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:10:55.247746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:10:55.247797 systemd-journald[1108]: Collecting audit messages is disabled.
Jun 20 19:10:55.247839 systemd-journald[1108]: Journal started
Jun 20 19:10:55.247868 systemd-journald[1108]: Runtime Journal (/run/log/journal/79cc737ba6844150979400f1492ebd59) is 8M, max 148.6M, 140.6M free.
Jun 20 19:10:53.918399 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:10:53.930459 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 20 19:10:53.931092 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:10:55.266545 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:10:55.294528 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:10:55.325528 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:10:55.358994 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:10:55.359117 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:10:55.359529 systemd[1]: Stopped verity-setup.service.
Jun 20 19:10:55.396528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:10:55.410547 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:10:55.421246 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:10:55.431976 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:10:55.443001 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:10:55.452927 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:10:55.462930 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:10:55.472938 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:10:55.484290 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:10:55.496364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:10:55.508173 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:10:55.508531 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:10:55.520157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:10:55.520477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:10:55.532138 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:10:55.532471 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:10:55.543167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:10:55.543480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:10:55.555109 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:10:55.555425 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:10:55.566154 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:10:55.566523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:10:55.577169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:10:55.588144 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:10:55.601245 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:10:55.613217 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:10:55.625162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:10:55.650733 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:10:55.666656 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:10:55.692599 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:10:55.702732 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:10:55.702810 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:10:55.715744 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:10:55.737847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:10:55.753776 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:10:55.763936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:10:55.773372 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:10:55.789862 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:10:55.800731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:10:55.808518 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:10:55.818726 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:10:55.825158 systemd-journald[1108]: Time spent on flushing to /var/log/journal/79cc737ba6844150979400f1492ebd59 is 84.179ms for 945 entries.
Jun 20 19:10:55.825158 systemd-journald[1108]: System Journal (/var/log/journal/79cc737ba6844150979400f1492ebd59) is 8M, max 584.8M, 576.8M free.
Jun 20 19:10:55.957115 systemd-journald[1108]: Received client request to flush runtime journal.
Jun 20 19:10:55.957186 kernel: loop0: detected capacity change from 0 to 52152
Jun 20 19:10:55.840120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:10:55.868746 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:10:55.884753 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:10:55.903234 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 20 19:10:55.923143 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:10:55.938796 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:10:55.950152 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:10:55.968648 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:10:55.981258 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:10:55.993302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:10:56.020102 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:10:56.046772 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:10:56.058395 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:10:56.072546 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:10:56.090336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:10:56.103454 udevadm[1146]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jun 20 19:10:56.122302 kernel: loop1: detected capacity change from 0 to 224512
Jun 20 19:10:56.127932 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:10:56.133008 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:10:56.170289 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jun 20 19:10:56.170332 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jun 20 19:10:56.182073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:10:56.282524 kernel: loop2: detected capacity change from 0 to 138176
Jun 20 19:10:56.409560 kernel: loop3: detected capacity change from 0 to 147912
Jun 20 19:10:56.511386 kernel: loop4: detected capacity change from 0 to 52152
Jun 20 19:10:56.565811 kernel: loop5: detected capacity change from 0 to 224512
Jun 20 19:10:56.612558 kernel: loop6: detected capacity change from 0 to 138176
Jun 20 19:10:56.696411 kernel: loop7: detected capacity change from 0 to 147912
Jun 20 19:10:56.747369 (sd-merge)[1168]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jun 20 19:10:56.751819 (sd-merge)[1168]: Merged extensions into '/usr'.
Jun 20 19:10:56.762620 systemd[1]: Reload requested from client PID 1144 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:10:56.762643 systemd[1]: Reloading...
Jun 20 19:10:56.915574 zram_generator::config[1193]: No configuration found.
Jun 20 19:10:57.116593 ldconfig[1139]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:10:57.208195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:10:57.327977 systemd[1]: Reloading finished in 563 ms.
Jun 20 19:10:57.351635 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:10:57.362193 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:10:57.374282 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:10:57.398424 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:10:57.407844 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:10:57.430790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:10:57.465676 systemd[1]: Reload requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:10:57.465703 systemd[1]: Reloading...
Jun 20 19:10:57.478928 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
Jun 20 19:10:57.481223 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:10:57.482657 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:10:57.484719 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:10:57.485784 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jun 20 19:10:57.485924 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Jun 20 19:10:57.496273 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:10:57.497534 systemd-tmpfiles[1239]: Skipping /boot
Jun 20 19:10:57.524664 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:10:57.524694 systemd-tmpfiles[1239]: Skipping /boot
Jun 20 19:10:57.624977 zram_generator::config[1265]: No configuration found.
Jun 20 19:10:57.888968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:10:57.973568 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun 20 19:10:58.000538 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jun 20 19:10:58.067842 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:10:58.068256 systemd[1]: Reloading finished in 601 ms.
Jun 20 19:10:58.080449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:10:58.088530 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:10:58.100532 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jun 20 19:10:58.122435 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Jun 20 19:10:58.122587 kernel: ACPI: button: Sleep Button [SLPF]
Jun 20 19:10:58.134897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:10:58.186826 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Jun 20 19:10:58.196902 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:10:58.203865 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:10:58.207650 kernel: EDAC MC: Ver: 3.0.0
Jun 20 19:10:58.236801 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:10:58.257521 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1292)
Jun 20 19:10:58.262406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:10:58.270936 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:10:58.273756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:10:58.290777 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:10:58.308847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:10:58.327760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:10:58.346724 systemd[1]: Starting setup-oem.service - Setup OEM...
Jun 20 19:10:58.357105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:10:58.357234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:10:58.365708 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:10:58.385818 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:10:58.408284 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:10:58.419692 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:10:58.432264 augenrules[1370]: No rules Jun 20 19:10:58.436974 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:10:58.449663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:10:58.454661 systemd[1]: Finished ensure-sysext.service. Jun 20 19:10:58.462067 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:10:58.462460 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:10:58.473410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:10:58.474130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:10:58.486436 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:10:58.486792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:10:58.497266 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:10:58.509182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 20 19:10:58.509475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:10:58.521173 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:10:58.521470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:10:58.538346 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:10:58.550475 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 20 19:10:58.578911 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:10:58.591438 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 19:10:58.638685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jun 20 19:10:58.662855 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 19:10:58.681534 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:10:58.682747 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jun 20 19:10:58.698460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:10:58.710704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:10:58.710824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:10:58.719326 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:10:58.741784 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:10:58.747408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 19:10:58.747542 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:10:58.750575 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 19:10:58.751202 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:10:58.754190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:10:58.765800 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 19:10:58.804095 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jun 20 19:10:58.804859 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:10:58.817667 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:10:58.837755 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:10:58.871250 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 19:10:58.933163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:10:58.976760 systemd-networkd[1367]: lo: Link UP Jun 20 19:10:58.977312 systemd-networkd[1367]: lo: Gained carrier Jun 20 19:10:58.980020 systemd-networkd[1367]: Enumeration completed Jun 20 19:10:58.980205 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:10:58.981083 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:10:58.981101 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 19:10:58.981859 systemd-networkd[1367]: eth0: Link UP Jun 20 19:10:58.981875 systemd-networkd[1367]: eth0: Gained carrier Jun 20 19:10:58.981901 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:10:58.996599 systemd-networkd[1367]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jun 20 19:10:59.000842 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:10:59.005434 systemd-resolved[1368]: Positive Trust Anchors: Jun 20 19:10:59.005452 systemd-resolved[1368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:10:59.005534 systemd-resolved[1368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:10:59.012386 systemd-resolved[1368]: Defaulting to hostname 'linux'. Jun 20 19:10:59.018842 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:10:59.030900 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:10:59.041019 systemd[1]: Reached target network.target - Network. Jun 20 19:10:59.049691 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:10:59.061710 systemd[1]: Reached target sysinit.target - System Initialization. 
Jun 20 19:10:59.071887 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:10:59.083762 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:10:59.095982 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:10:59.105868 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:10:59.117747 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:10:59.129764 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:10:59.129841 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:10:59.138721 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:10:59.149773 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:10:59.161557 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:10:59.173587 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:10:59.186015 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:10:59.197703 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:10:59.219717 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:10:59.230381 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:10:59.243184 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:10:59.254967 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:10:59.265893 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 20 19:10:59.275706 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:10:59.284860 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:10:59.284919 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:10:59.290647 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:10:59.309736 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:10:59.331892 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:10:59.349713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:10:59.379796 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:10:59.389657 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:10:59.396820 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:10:59.398359 jq[1433]: false Jun 20 19:10:59.418268 systemd[1]: Started ntpd.service - Network Time Service. 
Jun 20 19:10:59.433070 extend-filesystems[1434]: Found loop4 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found loop5 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found loop6 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found loop7 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda1 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda2 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda3 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found usr Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda4 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda6 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda7 Jun 20 19:10:59.457733 extend-filesystems[1434]: Found sda9 Jun 20 19:10:59.457733 extend-filesystems[1434]: Checking size of /dev/sda9 Jun 20 19:10:59.645760 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jun 20 19:10:59.645828 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jun 20 19:10:59.645867 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1280) Jun 20 19:10:59.437777 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jun 20 19:10:59.520469 dbus-daemon[1432]: [system] SELinux support is enabled Jun 20 19:10:59.646586 extend-filesystems[1434]: Resized partition /dev/sda9 Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.472 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.476 INFO Fetch successful Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.476 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.477 INFO Fetch successful Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.477 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.478 INFO Fetch successful Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.478 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jun 20 19:10:59.655892 coreos-metadata[1431]: Jun 20 19:10:59.479 INFO Fetch successful Jun 20 19:10:59.455809 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:10:59.532060 dbus-daemon[1432]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1367 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 20 19:10:59.656667 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Jun 20 19:10:59.656667 extend-filesystems[1452]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 20 19:10:59.656667 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 2 Jun 20 19:10:59.656667 extend-filesystems[1452]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:02 UTC 2025 (1): Starting Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: ---------------------------------------------------- Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: ntp-4 is maintained by Network Time Foundation, Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: corporation. Support and training for ntp-4 are Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: available at https://www.nwtime.org/support Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: ---------------------------------------------------- Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: proto: precision = 0.075 usec (-24) Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: basedate set to 2025-06-08 Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: gps base set to 2025-06-08 (week 2370) Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Listen normally on 3 eth0 10.128.0.67:123 Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Listen normally on 4 lo [::1]:123 Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123
Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: failed to init interface for address fe80::4001:aff:fe80:43%2 Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: Listening on routing socket on fd #21 for interface updates Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:10:59.738762 ntpd[1439]: 20 Jun 19:10:59 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:10:59.517811 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:10:59.553444 ntpd[1439]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:02 UTC 2025 (1): Starting Jun 20 19:10:59.742431 extend-filesystems[1434]: Resized filesystem in /dev/sda9 Jun 20 19:10:59.543416 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:10:59.553476 ntpd[1439]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 19:10:59.564822 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jun 20 19:10:59.560586 ntpd[1439]: ---------------------------------------------------- Jun 20 19:10:59.566349 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:10:59.560614 ntpd[1439]: ntp-4 is maintained by Network Time Foundation, Jun 20 19:10:59.577737 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:10:59.560632 ntpd[1439]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jun 20 19:10:59.759931 update_engine[1459]: I20250620 19:10:59.700635 1459 main.cc:92] Flatcar Update Engine starting Jun 20 19:10:59.759931 update_engine[1459]: I20250620 19:10:59.703073 1459 update_check_scheduler.cc:74] Next update check in 6m11s Jun 20 19:10:59.603352 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:10:59.560660 ntpd[1439]: corporation. Support and training for ntp-4 are Jun 20 19:10:59.762921 jq[1463]: true Jun 20 19:10:59.618482 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:10:59.560675 ntpd[1439]: available at https://www.nwtime.org/support Jun 20 19:10:59.646185 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:10:59.560690 ntpd[1439]: ---------------------------------------------------- Jun 20 19:10:59.648594 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:10:59.563735 ntpd[1439]: proto: precision = 0.075 usec (-24) Jun 20 19:10:59.649175 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:10:59.564176 ntpd[1439]: basedate set to 2025-06-08 Jun 20 19:10:59.649769 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:10:59.564199 ntpd[1439]: gps base set to 2025-06-08 (week 2370) Jun 20 19:10:59.666453 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:10:59.588189 ntpd[1439]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 19:10:59.666849 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:10:59.588263 ntpd[1439]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 19:10:59.683199 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:10:59.588584 ntpd[1439]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 19:10:59.684588 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:10:59.588680 ntpd[1439]: Listen normally on 3 eth0 10.128.0.67:123 Jun 20 19:10:59.588765 ntpd[1439]: Listen normally on 4 lo [::1]:123 Jun 20 19:10:59.588848 ntpd[1439]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:10:59.588879 ntpd[1439]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123 Jun 20 19:10:59.588904 ntpd[1439]: failed to init interface for address fe80::4001:aff:fe80:43%2 Jun 20 19:10:59.588954 ntpd[1439]: Listening on routing socket on fd #21 for interface updates Jun 20 19:10:59.601628 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:10:59.601676 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 19:10:59.772552 jq[1469]: true Jun 20 19:10:59.789648 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 19:10:59.807566 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button) Jun 20 19:10:59.807636 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (Sleep Button) Jun 20 19:10:59.807670 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:10:59.808937 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:10:59.811418 systemd-logind[1458]: New seat seat0. Jun 20 19:10:59.823948 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:10:59.868611 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:10:59.914172 systemd[1]: Started update-engine.service - Update Engine. 
Jun 20 19:10:59.917752 tar[1468]: linux-amd64/LICENSE Jun 20 19:10:59.920425 tar[1468]: linux-amd64/helm Jun 20 19:10:59.928783 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:10:59.939514 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:10:59.939858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:10:59.940089 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:10:59.953972 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:10:59.969940 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 20 19:10:59.978874 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:10:59.979156 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:10:59.999903 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:11:00.019564 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:11:00.041930 systemd[1]: Starting sshkeys.service... Jun 20 19:11:00.100833 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:11:00.128265 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 19:11:00.151381 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 19:11:00.298695 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jun 20 19:11:00.304260 coreos-metadata[1509]: Jun 20 19:11:00.303 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jun 20 19:11:00.306943 coreos-metadata[1509]: Jun 20 19:11:00.306 INFO Fetch failed with 404: resource not found Jun 20 19:11:00.306943 coreos-metadata[1509]: Jun 20 19:11:00.306 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jun 20 19:11:00.307580 coreos-metadata[1509]: Jun 20 19:11:00.307 INFO Fetch successful Jun 20 19:11:00.307580 coreos-metadata[1509]: Jun 20 19:11:00.307 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jun 20 19:11:00.308328 coreos-metadata[1509]: Jun 20 19:11:00.308 INFO Fetch failed with 404: resource not found Jun 20 19:11:00.312095 coreos-metadata[1509]: Jun 20 19:11:00.308 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jun 20 19:11:00.315951 coreos-metadata[1509]: Jun 20 19:11:00.315 INFO Fetch failed with 404: resource not found Jun 20 19:11:00.315951 coreos-metadata[1509]: Jun 20 19:11:00.315 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jun 20 19:11:00.327320 coreos-metadata[1509]: Jun 20 19:11:00.323 INFO Fetch successful Jun 20 19:11:00.326120 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:11:00.332154 unknown[1509]: wrote ssh authorized keys file for user: core Jun 20 19:11:00.342908 systemd[1]: Started sshd@0-10.128.0.67:22-147.75.109.163:40684.service - OpenSSH per-connection server daemon (147.75.109.163:40684). Jun 20 19:11:00.409436 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:11:00.412251 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jun 20 19:11:00.420031 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 19:11:00.424649 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 19:11:00.425884 dbus-daemon[1432]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1501 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 19:11:00.442535 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:11:00.445768 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:11:00.452341 systemd[1]: Starting polkit.service - Authorization Manager... Jun 20 19:11:00.471029 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:11:00.483185 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:11:00.504155 systemd[1]: Finished sshkeys.service. Jun 20 19:11:00.552736 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jun 20 19:11:00.560688 polkitd[1532]: Started polkitd version 121 Jun 20 19:11:00.561672 ntpd[1439]: bind(24) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:11:00.562313 ntpd[1439]: 20 Jun 19:11:00 ntpd[1439]: bind(24) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 19:11:00.562313 ntpd[1439]: 20 Jun 19:11:00 ntpd[1439]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:43%2#123 Jun 20 19:11:00.562313 ntpd[1439]: 20 Jun 19:11:00 ntpd[1439]: failed to init interface for address fe80::4001:aff:fe80:43%2 Jun 20 19:11:00.561721 ntpd[1439]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:43%2#123 Jun 20 19:11:00.561743 ntpd[1439]: failed to init interface for address fe80::4001:aff:fe80:43%2 Jun 20 19:11:00.575929 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:11:00.594923 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:11:00.603124 polkitd[1532]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 19:11:00.603239 polkitd[1532]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 19:11:00.604972 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:11:00.610708 polkitd[1532]: Finished loading, compiling and executing 2 rules Jun 20 19:11:00.612720 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 19:11:00.614455 systemd[1]: Started polkit.service - Authorization Manager. Jun 20 19:11:00.616895 polkitd[1532]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 19:11:00.661023 systemd-resolved[1368]: System hostname changed to 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal'. 
Jun 20 19:11:00.661773 systemd-hostnamed[1501]: Hostname set to (transient) Jun 20 19:11:00.677814 containerd[1476]: time="2025-06-20T19:11:00.677177799Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 19:11:00.737695 containerd[1476]: time="2025-06-20T19:11:00.737207333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:11:00.742349 containerd[1476]: time="2025-06-20T19:11:00.741384527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:11:00.742349 containerd[1476]: time="2025-06-20T19:11:00.741436745Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 19:11:00.742349 containerd[1476]: time="2025-06-20T19:11:00.741462166Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 19:11:00.742349 containerd[1476]: time="2025-06-20T19:11:00.741734226Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 19:11:00.742349 containerd[1476]: time="2025-06-20T19:11:00.741767804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 19:11:00.742349 containerd[1476]: time="2025-06-20T19:11:00.741847584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:11:00.742349 containerd[1476]: time="2025-06-20T19:11:00.741865952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 20 19:11:00.743278 containerd[1476]: time="2025-06-20T19:11:00.743241017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:11:00.743405 containerd[1476]: time="2025-06-20T19:11:00.743386465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 19:11:00.743508 containerd[1476]: time="2025-06-20T19:11:00.743470632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:11:00.744550 containerd[1476]: time="2025-06-20T19:11:00.743595170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 19:11:00.744550 containerd[1476]: time="2025-06-20T19:11:00.744248075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:11:00.744909 containerd[1476]: time="2025-06-20T19:11:00.744880983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:11:00.745254 containerd[1476]: time="2025-06-20T19:11:00.745221686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:11:00.745371 containerd[1476]: time="2025-06-20T19:11:00.745351094Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 20 19:11:00.746477 containerd[1476]: time="2025-06-20T19:11:00.746311979Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 19:11:00.746477 containerd[1476]: time="2025-06-20T19:11:00.746399568Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:11:00.754077 containerd[1476]: time="2025-06-20T19:11:00.753959152Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 19:11:00.754077 containerd[1476]: time="2025-06-20T19:11:00.754056784Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 19:11:00.755573 containerd[1476]: time="2025-06-20T19:11:00.754462723Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 19:11:00.755573 containerd[1476]: time="2025-06-20T19:11:00.754570501Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 19:11:00.755573 containerd[1476]: time="2025-06-20T19:11:00.754602868Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 19:11:00.755573 containerd[1476]: time="2025-06-20T19:11:00.754824584Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 19:11:00.755817 containerd[1476]: time="2025-06-20T19:11:00.755573206Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 19:11:00.755817 containerd[1476]: time="2025-06-20T19:11:00.755751382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 19:11:00.755817 containerd[1476]: time="2025-06-20T19:11:00.755782815Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 20 19:11:00.755941 containerd[1476]: time="2025-06-20T19:11:00.755821991Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 19:11:00.755941 containerd[1476]: time="2025-06-20T19:11:00.755849261Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.755941 containerd[1476]: time="2025-06-20T19:11:00.755874173Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.755941 containerd[1476]: time="2025-06-20T19:11:00.755898448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.755941 containerd[1476]: time="2025-06-20T19:11:00.755922071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.755946546Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.755969948Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.755992297Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.756013585Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.756046251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.756070922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.756092385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.756116266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756148 containerd[1476]: time="2025-06-20T19:11:00.756137091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756159048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756179756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756201333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756224060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756260701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756282174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756302876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..."
type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756324589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756348869Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756383999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756408009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.756566 containerd[1476]: time="2025-06-20T19:11:00.756429668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 19:11:00.757044 containerd[1476]: time="2025-06-20T19:11:00.756955870Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 19:11:00.758173 containerd[1476]: time="2025-06-20T19:11:00.757127590Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 19:11:00.758173 containerd[1476]: time="2025-06-20T19:11:00.757161767Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 19:11:00.758173 containerd[1476]: time="2025-06-20T19:11:00.757186165Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 19:11:00.758173 containerd[1476]: time="2025-06-20T19:11:00.757203593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jun 20 19:11:00.758173 containerd[1476]: time="2025-06-20T19:11:00.757224685Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 19:11:00.758173 containerd[1476]: time="2025-06-20T19:11:00.757245252Z" level=info msg="NRI interface is disabled by configuration." Jun 20 19:11:00.758173 containerd[1476]: time="2025-06-20T19:11:00.757261555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 20 19:11:00.758566 containerd[1476]: time="2025-06-20T19:11:00.758278472Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 19:11:00.758566 containerd[1476]: time="2025-06-20T19:11:00.758363178Z" level=info msg="Connect containerd service" Jun 20 19:11:00.758566 containerd[1476]: time="2025-06-20T19:11:00.758427151Z" level=info msg="using legacy CRI server" Jun 20 19:11:00.758566 containerd[1476]: time="2025-06-20T19:11:00.758442007Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:11:00.758926 containerd[1476]: time="2025-06-20T19:11:00.758839169Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 19:11:00.761392 containerd[1476]: time="2025-06-20T19:11:00.760705575Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jun 20 19:11:00.761392 containerd[1476]: time="2025-06-20T19:11:00.761144221Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:11:00.761392 containerd[1476]: time="2025-06-20T19:11:00.761216705Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:11:00.761392 containerd[1476]: time="2025-06-20T19:11:00.761273932Z" level=info msg="Start subscribing containerd event" Jun 20 19:11:00.761392 containerd[1476]: time="2025-06-20T19:11:00.761319360Z" level=info msg="Start recovering state" Jun 20 19:11:00.761709 containerd[1476]: time="2025-06-20T19:11:00.761402354Z" level=info msg="Start event monitor" Jun 20 19:11:00.761709 containerd[1476]: time="2025-06-20T19:11:00.761426658Z" level=info msg="Start snapshots syncer" Jun 20 19:11:00.761709 containerd[1476]: time="2025-06-20T19:11:00.761441509Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:11:00.761709 containerd[1476]: time="2025-06-20T19:11:00.761452331Z" level=info msg="Start streaming server" Jun 20 19:11:00.762721 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:11:00.765806 containerd[1476]: time="2025-06-20T19:11:00.762900742Z" level=info msg="containerd successfully booted in 0.087957s" Jun 20 19:11:00.840579 sshd[1525]: Accepted publickey for core from 147.75.109.163 port 40684 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:00.844306 sshd-session[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:00.866119 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:11:00.884995 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:11:00.903752 systemd-logind[1458]: New session 1 of user core. Jun 20 19:11:00.930446 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jun 20 19:11:00.947882 systemd-networkd[1367]: eth0: Gained IPv6LL Jun 20 19:11:00.956984 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:11:00.966597 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:11:00.980080 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:11:01.003816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:01.007960 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:11:01.022639 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:11:01.038689 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jun 20 19:11:01.068901 init.sh[1557]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jun 20 19:11:01.073418 systemd-logind[1458]: New session c1 of user core. Jun 20 19:11:01.075387 init.sh[1557]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jun 20 19:11:01.078180 init.sh[1557]: + /usr/bin/google_instance_setup Jun 20 19:11:01.106799 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:11:01.129158 tar[1468]: linux-amd64/README.md Jun 20 19:11:01.161964 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:11:01.410071 systemd[1553]: Queued start job for default target default.target. Jun 20 19:11:01.415338 systemd[1553]: Created slice app.slice - User Application Slice. Jun 20 19:11:01.415392 systemd[1553]: Reached target paths.target - Paths. Jun 20 19:11:01.415725 systemd[1553]: Reached target timers.target - Timers. Jun 20 19:11:01.419702 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:11:01.447706 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:11:01.447926 systemd[1553]: Reached target sockets.target - Sockets. 
Jun 20 19:11:01.448016 systemd[1553]: Reached target basic.target - Basic System. Jun 20 19:11:01.448093 systemd[1553]: Reached target default.target - Main User Target. Jun 20 19:11:01.448148 systemd[1553]: Startup finished in 351ms. Jun 20 19:11:01.449715 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:11:01.467612 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:11:01.718000 systemd[1]: Started sshd@1-10.128.0.67:22-147.75.109.163:40686.service - OpenSSH per-connection server daemon (147.75.109.163:40686). Jun 20 19:11:01.797472 instance-setup[1563]: INFO Running google_set_multiqueue. Jun 20 19:11:01.818207 instance-setup[1563]: INFO Set channels for eth0 to 2. Jun 20 19:11:01.823529 instance-setup[1563]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jun 20 19:11:01.825609 instance-setup[1563]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jun 20 19:11:01.826062 instance-setup[1563]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jun 20 19:11:01.828702 instance-setup[1563]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jun 20 19:11:01.828949 instance-setup[1563]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jun 20 19:11:01.830689 instance-setup[1563]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jun 20 19:11:01.832364 instance-setup[1563]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Jun 20 19:11:01.836048 instance-setup[1563]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jun 20 19:11:01.845615 instance-setup[1563]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 20 19:11:01.850091 instance-setup[1563]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 20 19:11:01.852121 instance-setup[1563]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jun 20 19:11:01.852196 instance-setup[1563]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jun 20 19:11:01.878988 init.sh[1557]: + /usr/bin/google_metadata_script_runner --script-type startup Jun 20 19:11:02.049574 startup-script[1613]: INFO Starting startup scripts. Jun 20 19:11:02.055912 startup-script[1613]: INFO No startup scripts found in metadata. Jun 20 19:11:02.056170 startup-script[1613]: INFO Finished running startup scripts. Jun 20 19:11:02.072048 sshd[1583]: Accepted publickey for core from 147.75.109.163 port 40686 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:02.073564 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:02.083775 systemd-logind[1458]: New session 2 of user core. Jun 20 19:11:02.089795 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:11:02.098580 init.sh[1557]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jun 20 19:11:02.098580 init.sh[1557]: + daemon_pids=() Jun 20 19:11:02.098580 init.sh[1557]: + for d in accounts clock_skew network Jun 20 19:11:02.098580 init.sh[1557]: + daemon_pids+=($!) Jun 20 19:11:02.098580 init.sh[1557]: + for d in accounts clock_skew network Jun 20 19:11:02.098580 init.sh[1557]: + daemon_pids+=($!) Jun 20 19:11:02.098580 init.sh[1557]: + for d in accounts clock_skew network Jun 20 19:11:02.098580 init.sh[1557]: + daemon_pids+=($!) 
Jun 20 19:11:02.098580 init.sh[1557]: + NOTIFY_SOCKET=/run/systemd/notify Jun 20 19:11:02.098580 init.sh[1557]: + /usr/bin/systemd-notify --ready Jun 20 19:11:02.099539 init.sh[1616]: + /usr/bin/google_accounts_daemon Jun 20 19:11:02.105377 init.sh[1617]: + /usr/bin/google_clock_skew_daemon Jun 20 19:11:02.109535 init.sh[1618]: + /usr/bin/google_network_daemon Jun 20 19:11:02.130395 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jun 20 19:11:02.141912 init.sh[1557]: + wait -n 1616 1617 1618 Jun 20 19:11:02.298942 sshd[1620]: Connection closed by 147.75.109.163 port 40686 Jun 20 19:11:02.299574 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Jun 20 19:11:02.309973 systemd[1]: sshd@1-10.128.0.67:22-147.75.109.163:40686.service: Deactivated successfully. Jun 20 19:11:02.317007 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:11:02.323449 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:11:02.325910 systemd-logind[1458]: Removed session 2. Jun 20 19:11:02.368640 systemd[1]: Started sshd@2-10.128.0.67:22-147.75.109.163:40698.service - OpenSSH per-connection server daemon (147.75.109.163:40698). Jun 20 19:11:02.553876 google-networking[1618]: INFO Starting Google Networking daemon. Jun 20 19:11:02.590923 google-clock-skew[1617]: INFO Starting Google Clock Skew daemon. Jun 20 19:11:02.603733 google-clock-skew[1617]: INFO Clock drift token has changed: 0. Jun 20 19:11:02.624009 groupadd[1635]: group added to /etc/group: name=google-sudoers, GID=1000 Jun 20 19:11:02.631087 groupadd[1635]: group added to /etc/gshadow: name=google-sudoers Jun 20 19:11:02.684517 groupadd[1635]: new group: name=google-sudoers, GID=1000 Jun 20 19:11:02.718151 google-accounts[1616]: INFO Starting Google Accounts daemon. 
Jun 20 19:11:02.721986 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 40698 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:02.726067 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:02.734856 google-accounts[1616]: WARNING OS Login not installed. Jun 20 19:11:02.738439 systemd-logind[1458]: New session 3 of user core. Jun 20 19:11:02.739263 google-accounts[1616]: INFO Creating a new user account for 0. Jun 20 19:11:02.745203 init.sh[1644]: useradd: invalid user name '0': use --badname to ignore Jun 20 19:11:02.745288 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:11:02.746132 google-accounts[1616]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jun 20 19:11:02.942609 sshd[1646]: Connection closed by 147.75.109.163 port 40698 Jun 20 19:11:02.944146 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jun 20 19:11:02.951559 systemd[1]: sshd@2-10.128.0.67:22-147.75.109.163:40698.service: Deactivated successfully. Jun 20 19:11:02.955044 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:11:02.956068 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:11:02.957855 systemd-logind[1458]: Removed session 3. Jun 20 19:11:03.175832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:03.189138 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:11:03.200200 systemd[1]: Startup finished in 1.126s (kernel) + 11.004s (initrd) + 10.484s (userspace) = 22.614s. Jun 20 19:11:03.210560 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:11:03.000466 systemd-resolved[1368]: Clock change detected. Flushing caches. 
Jun 20 19:11:03.024448 systemd-journald[1108]: Time jumped backwards, rotating. Jun 20 19:11:03.001742 google-clock-skew[1617]: INFO Synced system time with hardware clock. Jun 20 19:11:03.333644 ntpd[1439]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:43%2]:123 Jun 20 19:11:03.334347 ntpd[1439]: 20 Jun 19:11:03 ntpd[1439]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:43%2]:123 Jun 20 19:11:04.020372 kubelet[1656]: E0620 19:11:04.020278 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:11:04.023511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:11:04.023795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:11:04.024338 systemd[1]: kubelet.service: Consumed 1.344s CPU time, 267.3M memory peak. Jun 20 19:11:12.779732 systemd[1]: Started sshd@3-10.128.0.67:22-147.75.109.163:41698.service - OpenSSH per-connection server daemon (147.75.109.163:41698). Jun 20 19:11:13.090817 sshd[1669]: Accepted publickey for core from 147.75.109.163 port 41698 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:13.093077 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:13.100609 systemd-logind[1458]: New session 4 of user core. Jun 20 19:11:13.108364 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:11:13.310866 sshd[1671]: Connection closed by 147.75.109.163 port 41698 Jun 20 19:11:13.311866 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Jun 20 19:11:13.316860 systemd[1]: sshd@3-10.128.0.67:22-147.75.109.163:41698.service: Deactivated successfully. 
Jun 20 19:11:13.319476 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:11:13.321562 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:11:13.323056 systemd-logind[1458]: Removed session 4. Jun 20 19:11:13.372376 systemd[1]: Started sshd@4-10.128.0.67:22-147.75.109.163:41712.service - OpenSSH per-connection server daemon (147.75.109.163:41712). Jun 20 19:11:13.676305 sshd[1677]: Accepted publickey for core from 147.75.109.163 port 41712 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:13.678070 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:13.684088 systemd-logind[1458]: New session 5 of user core. Jun 20 19:11:13.688157 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:11:13.887130 sshd[1679]: Connection closed by 147.75.109.163 port 41712 Jun 20 19:11:13.888022 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Jun 20 19:11:13.892573 systemd[1]: sshd@4-10.128.0.67:22-147.75.109.163:41712.service: Deactivated successfully. Jun 20 19:11:13.895055 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:11:13.897214 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:11:13.898682 systemd-logind[1458]: Removed session 5. Jun 20 19:11:13.946375 systemd[1]: Started sshd@5-10.128.0.67:22-147.75.109.163:41716.service - OpenSSH per-connection server daemon (147.75.109.163:41716). Jun 20 19:11:14.179148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:11:14.188348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 20 19:11:14.240964 sshd[1685]: Accepted publickey for core from 147.75.109.163 port 41716 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:14.242937 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:14.253181 systemd-logind[1458]: New session 6 of user core. Jun 20 19:11:14.260281 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:11:14.459966 sshd[1690]: Connection closed by 147.75.109.163 port 41716 Jun 20 19:11:14.460218 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Jun 20 19:11:14.467779 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:11:14.469425 systemd[1]: sshd@5-10.128.0.67:22-147.75.109.163:41716.service: Deactivated successfully. Jun 20 19:11:14.474460 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:11:14.478839 systemd-logind[1458]: Removed session 6. Jun 20 19:11:14.526453 systemd[1]: Started sshd@6-10.128.0.67:22-147.75.109.163:41730.service - OpenSSH per-connection server daemon (147.75.109.163:41730). Jun 20 19:11:14.547214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:14.560851 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:11:14.618898 kubelet[1701]: E0620 19:11:14.618752 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:11:14.623395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:11:14.623641 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 20 19:11:14.624237 systemd[1]: kubelet.service: Consumed 205ms CPU time, 108.1M memory peak. Jun 20 19:11:14.831786 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 41730 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:14.833789 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:14.840722 systemd-logind[1458]: New session 7 of user core. Jun 20 19:11:14.848252 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:11:15.032685 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:11:15.033250 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:11:15.058473 sudo[1711]: pam_unix(sudo:session): session closed for user root Jun 20 19:11:15.101798 sshd[1710]: Connection closed by 147.75.109.163 port 41730 Jun 20 19:11:15.103257 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Jun 20 19:11:15.107898 systemd[1]: sshd@6-10.128.0.67:22-147.75.109.163:41730.service: Deactivated successfully. Jun 20 19:11:15.110362 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:11:15.112547 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:11:15.114584 systemd-logind[1458]: Removed session 7. Jun 20 19:11:15.159379 systemd[1]: Started sshd@7-10.128.0.67:22-147.75.109.163:41742.service - OpenSSH per-connection server daemon (147.75.109.163:41742). Jun 20 19:11:15.457077 sshd[1717]: Accepted publickey for core from 147.75.109.163 port 41742 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:15.458893 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:15.465459 systemd-logind[1458]: New session 8 of user core. Jun 20 19:11:15.473256 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 20 19:11:15.638058 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:11:15.638597 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:11:15.643856 sudo[1721]: pam_unix(sudo:session): session closed for user root Jun 20 19:11:15.658044 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:11:15.658558 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:11:15.679558 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:11:15.718657 augenrules[1743]: No rules Jun 20 19:11:15.720538 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:11:15.720892 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:11:15.722617 sudo[1720]: pam_unix(sudo:session): session closed for user root Jun 20 19:11:15.765721 sshd[1719]: Connection closed by 147.75.109.163 port 41742 Jun 20 19:11:15.766619 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Jun 20 19:11:15.771250 systemd[1]: sshd@7-10.128.0.67:22-147.75.109.163:41742.service: Deactivated successfully. Jun 20 19:11:15.773597 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:11:15.775587 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:11:15.777162 systemd-logind[1458]: Removed session 8. Jun 20 19:11:15.822360 systemd[1]: Started sshd@8-10.128.0.67:22-147.75.109.163:41750.service - OpenSSH per-connection server daemon (147.75.109.163:41750). 
Jun 20 19:11:16.113947 sshd[1752]: Accepted publickey for core from 147.75.109.163 port 41750 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:11:16.115634 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:11:16.122732 systemd-logind[1458]: New session 9 of user core. Jun 20 19:11:16.133230 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:11:16.292847 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:11:16.293401 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:11:16.793338 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:11:16.805598 (dockerd)[1772]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:11:17.281137 dockerd[1772]: time="2025-06-20T19:11:17.280178903Z" level=info msg="Starting up" Jun 20 19:11:17.398078 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1946468607-merged.mount: Deactivated successfully. Jun 20 19:11:17.408988 systemd[1]: var-lib-docker-metacopy\x2dcheck3948802135-merged.mount: Deactivated successfully. Jun 20 19:11:17.429875 dockerd[1772]: time="2025-06-20T19:11:17.429539072Z" level=info msg="Loading containers: start." Jun 20 19:11:17.660206 kernel: Initializing XFRM netlink socket Jun 20 19:11:17.783701 systemd-networkd[1367]: docker0: Link UP Jun 20 19:11:17.822898 dockerd[1772]: time="2025-06-20T19:11:17.822832692Z" level=info msg="Loading containers: done." 
Jun 20 19:11:17.844646 dockerd[1772]: time="2025-06-20T19:11:17.844576310Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:11:17.844889 dockerd[1772]: time="2025-06-20T19:11:17.844744490Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 19:11:17.844987 dockerd[1772]: time="2025-06-20T19:11:17.844937375Z" level=info msg="Daemon has completed initialization" Jun 20 19:11:17.890424 dockerd[1772]: time="2025-06-20T19:11:17.889578088Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:11:17.890144 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:11:18.390999 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2683385709-merged.mount: Deactivated successfully. Jun 20 19:11:18.893603 containerd[1476]: time="2025-06-20T19:11:18.893523988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:11:19.460472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1624256806.mount: Deactivated successfully. 
Jun 20 19:11:21.084720 containerd[1476]: time="2025-06-20T19:11:21.084637952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:21.086566 containerd[1476]: time="2025-06-20T19:11:21.086486589Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28805673"
Jun 20 19:11:21.088285 containerd[1476]: time="2025-06-20T19:11:21.088212920Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:21.092108 containerd[1476]: time="2025-06-20T19:11:21.092029463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:21.093765 containerd[1476]: time="2025-06-20T19:11:21.093522212Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.199933575s"
Jun 20 19:11:21.093765 containerd[1476]: time="2025-06-20T19:11:21.093577210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jun 20 19:11:21.094512 containerd[1476]: time="2025-06-20T19:11:21.094464404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jun 20 19:11:22.662536 containerd[1476]: time="2025-06-20T19:11:22.662458216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:22.664396 containerd[1476]: time="2025-06-20T19:11:22.664316529Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24785846"
Jun 20 19:11:22.665684 containerd[1476]: time="2025-06-20T19:11:22.665609966Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:22.669874 containerd[1476]: time="2025-06-20T19:11:22.669800796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:22.671452 containerd[1476]: time="2025-06-20T19:11:22.671216253Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.576582362s"
Jun 20 19:11:22.671452 containerd[1476]: time="2025-06-20T19:11:22.671328225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jun 20 19:11:22.672165 containerd[1476]: time="2025-06-20T19:11:22.672136299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jun 20 19:11:23.930312 containerd[1476]: time="2025-06-20T19:11:23.930227258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:23.932000 containerd[1476]: time="2025-06-20T19:11:23.931891584Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19178832"
Jun 20 19:11:23.933631 containerd[1476]: time="2025-06-20T19:11:23.933559130Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:23.940959 containerd[1476]: time="2025-06-20T19:11:23.940280705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:23.943661 containerd[1476]: time="2025-06-20T19:11:23.943603939Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.271272612s"
Jun 20 19:11:23.943661 containerd[1476]: time="2025-06-20T19:11:23.943663589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jun 20 19:11:23.945000 containerd[1476]: time="2025-06-20T19:11:23.944959552Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jun 20 19:11:24.716254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:11:24.728069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:11:25.031216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:11:25.043450 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:11:25.131165 kubelet[2033]: E0620 19:11:25.131084 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:11:25.136219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:11:25.136671 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:11:25.137758 systemd[1]: kubelet.service: Consumed 247ms CPU time, 108.2M memory peak.
Jun 20 19:11:25.404287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885865200.mount: Deactivated successfully.
Jun 20 19:11:26.064633 containerd[1476]: time="2025-06-20T19:11:26.064556484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:26.066216 containerd[1476]: time="2025-06-20T19:11:26.066139064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30897258"
Jun 20 19:11:26.068086 containerd[1476]: time="2025-06-20T19:11:26.068027266Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:26.071421 containerd[1476]: time="2025-06-20T19:11:26.071337886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:26.072926 containerd[1476]: time="2025-06-20T19:11:26.072412467Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.127401817s"
Jun 20 19:11:26.072926 containerd[1476]: time="2025-06-20T19:11:26.072468427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jun 20 19:11:26.073577 containerd[1476]: time="2025-06-20T19:11:26.073506855Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 19:11:26.508862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679100613.mount: Deactivated successfully.
Jun 20 19:11:27.745583 containerd[1476]: time="2025-06-20T19:11:27.745497263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:27.747378 containerd[1476]: time="2025-06-20T19:11:27.747299750Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Jun 20 19:11:27.749206 containerd[1476]: time="2025-06-20T19:11:27.749131660Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:27.753160 containerd[1476]: time="2025-06-20T19:11:27.753088617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:27.755324 containerd[1476]: time="2025-06-20T19:11:27.754717162Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.681164727s"
Jun 20 19:11:27.755324 containerd[1476]: time="2025-06-20T19:11:27.754766991Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jun 20 19:11:27.756064 containerd[1476]: time="2025-06-20T19:11:27.755769131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:11:28.190065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705678964.mount: Deactivated successfully.
Jun 20 19:11:28.197777 containerd[1476]: time="2025-06-20T19:11:28.197666821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:28.199037 containerd[1476]: time="2025-06-20T19:11:28.198971979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Jun 20 19:11:28.201937 containerd[1476]: time="2025-06-20T19:11:28.200549536Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:28.205070 containerd[1476]: time="2025-06-20T19:11:28.205022965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:28.206936 containerd[1476]: time="2025-06-20T19:11:28.206865342Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 451.052649ms"
Jun 20 19:11:28.207127 containerd[1476]: time="2025-06-20T19:11:28.207096161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 19:11:28.207864 containerd[1476]: time="2025-06-20T19:11:28.207788291Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jun 20 19:11:28.628364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531786900.mount: Deactivated successfully.
Jun 20 19:11:30.471036 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jun 20 19:11:31.057864 containerd[1476]: time="2025-06-20T19:11:31.057783764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:31.059751 containerd[1476]: time="2025-06-20T19:11:31.059679094Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557924"
Jun 20 19:11:31.061201 containerd[1476]: time="2025-06-20T19:11:31.061120417Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:31.065769 containerd[1476]: time="2025-06-20T19:11:31.065692003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:11:31.069158 containerd[1476]: time="2025-06-20T19:11:31.067664375Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.859829704s"
Jun 20 19:11:31.069158 containerd[1476]: time="2025-06-20T19:11:31.067724846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jun 20 19:11:34.462306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:11:34.462654 systemd[1]: kubelet.service: Consumed 247ms CPU time, 108.2M memory peak.
Jun 20 19:11:34.473447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:11:34.525458 systemd[1]: Reload requested from client PID 2184 ('systemctl') (unit session-9.scope)...
Jun 20 19:11:34.525482 systemd[1]: Reloading...
Jun 20 19:11:34.691013 zram_generator::config[2225]: No configuration found.
Jun 20 19:11:34.894372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:11:35.072021 systemd[1]: Reloading finished in 545 ms.
Jun 20 19:11:35.197882 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 19:11:35.198278 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 19:11:35.198740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:11:35.198816 systemd[1]: kubelet.service: Consumed 144ms CPU time, 97.4M memory peak.
Jun 20 19:11:35.208484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:11:36.694224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:11:36.698631 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 19:11:36.760877 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:11:36.761404 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 19:11:36.761404 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 19:11:36.761404 kubelet[2279]: I0620 19:11:36.761137 2279 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 19:11:37.294753 kubelet[2279]: I0620 19:11:37.294688 2279 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jun 20 19:11:37.294753 kubelet[2279]: I0620 19:11:37.294728 2279 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 19:11:37.295249 kubelet[2279]: I0620 19:11:37.295207 2279 server.go:954] "Client rotation is on, will bootstrap in background"
Jun 20 19:11:37.344899 kubelet[2279]: E0620 19:11:37.344828 2279 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:11:37.351177 kubelet[2279]: I0620 19:11:37.350990 2279 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:11:37.364996 kubelet[2279]: E0620 19:11:37.364945 2279 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jun 20 19:11:37.364996 kubelet[2279]: I0620 19:11:37.364996 2279 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jun 20 19:11:37.372296 kubelet[2279]: I0620 19:11:37.371766 2279 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:11:37.373676 kubelet[2279]: I0620 19:11:37.373588 2279 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:11:37.374022 kubelet[2279]: I0620 19:11:37.373655 2279 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:11:37.374259 kubelet[2279]: I0620 19:11:37.374031 2279 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:11:37.374259 kubelet[2279]: I0620 19:11:37.374063 2279 container_manager_linux.go:304] "Creating device plugin manager"
Jun 20 19:11:37.374362 kubelet[2279]: I0620 19:11:37.374278 2279 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:11:37.382359 kubelet[2279]: I0620 19:11:37.382290 2279 kubelet.go:446] "Attempting to sync node with API server"
Jun 20 19:11:37.382359 kubelet[2279]: I0620 19:11:37.382354 2279 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:11:37.382594 kubelet[2279]: I0620 19:11:37.382388 2279 kubelet.go:352] "Adding apiserver pod source"
Jun 20 19:11:37.382594 kubelet[2279]: I0620 19:11:37.382413 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:11:37.389969 kubelet[2279]: W0620 19:11:37.388853 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused
Jun 20 19:11:37.389969 kubelet[2279]: E0620 19:11:37.389359 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:11:37.389969 kubelet[2279]: W0620 19:11:37.389498 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused
Jun 20 19:11:37.389969 kubelet[2279]: E0620 19:11:37.389558 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:11:37.389969 kubelet[2279]: I0620 19:11:37.389708 2279 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jun 20 19:11:37.390367 kubelet[2279]: I0620 19:11:37.390340 2279 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 19:11:37.391705 kubelet[2279]: W0620 19:11:37.391666 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 19:11:37.396349 kubelet[2279]: I0620 19:11:37.396051 2279 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:11:37.396349 kubelet[2279]: I0620 19:11:37.396114 2279 server.go:1287] "Started kubelet"
Jun 20 19:11:37.401801 kubelet[2279]: I0620 19:11:37.401400 2279 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:11:37.402791 kubelet[2279]: I0620 19:11:37.402731 2279 server.go:479] "Adding debug handlers to kubelet server"
Jun 20 19:11:37.405877 kubelet[2279]: I0620 19:11:37.405756 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:11:37.406668 kubelet[2279]: I0620 19:11:37.406194 2279 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:11:37.406668 kubelet[2279]: I0620 19:11:37.406419 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:11:37.410186 kubelet[2279]: E0620 19:11:37.407680 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal.184ad5ff3c8c9ae0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,UID:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,},FirstTimestamp:2025-06-20 19:11:37.396083424 +0000 UTC m=+0.690524707,LastTimestamp:2025-06-20 19:11:37.396083424 +0000 UTC m=+0.690524707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,}"
Jun 20 19:11:37.413066 kubelet[2279]: I0620 19:11:37.413033 2279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:11:37.418555 kubelet[2279]: I0620 19:11:37.417024 2279 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:11:37.418555 kubelet[2279]: E0620 19:11:37.417307 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found"
Jun 20 19:11:37.418555 kubelet[2279]: I0620 19:11:37.417703 2279 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:11:37.418555 kubelet[2279]: I0620 19:11:37.417772 2279 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:11:37.418555 kubelet[2279]: W0620 19:11:37.418322 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused
Jun 20 19:11:37.418555 kubelet[2279]: E0620 19:11:37.418391 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:11:37.418555 kubelet[2279]: E0620 19:11:37.418490 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="200ms"
Jun 20 19:11:37.419775 kubelet[2279]: I0620 19:11:37.419748 2279 factory.go:221] Registration of the systemd container factory successfully
Jun 20 19:11:37.420089 kubelet[2279]: I0620 19:11:37.420062 2279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:11:37.422265 kubelet[2279]: E0620 19:11:37.422241 2279 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 19:11:37.422539 kubelet[2279]: I0620 19:11:37.422518 2279 factory.go:221] Registration of the containerd container factory successfully
Jun 20 19:11:37.436547 kubelet[2279]: I0620 19:11:37.436486 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:11:37.438660 kubelet[2279]: I0620 19:11:37.438627 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:11:37.438944 kubelet[2279]: I0620 19:11:37.438809 2279 status_manager.go:227] "Starting to sync pod status with apiserver"
Jun 20 19:11:37.438944 kubelet[2279]: I0620 19:11:37.438860 2279 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:11:37.438944 kubelet[2279]: I0620 19:11:37.438877 2279 kubelet.go:2382] "Starting kubelet main sync loop"
Jun 20 19:11:37.439515 kubelet[2279]: E0620 19:11:37.439160 2279 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:11:37.450487 kubelet[2279]: W0620 19:11:37.450420 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused
Jun 20 19:11:37.450857 kubelet[2279]: E0620 19:11:37.450816 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:11:37.465703 kubelet[2279]: I0620 19:11:37.465647 2279 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:11:37.465928 kubelet[2279]: I0620 19:11:37.465760 2279 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:11:37.465928 kubelet[2279]: I0620 19:11:37.465787 2279 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:11:37.468513 kubelet[2279]: I0620 19:11:37.468459 2279 policy_none.go:49] "None policy: Start"
Jun 20 19:11:37.468513 kubelet[2279]: I0620 19:11:37.468490 2279 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:11:37.468513 kubelet[2279]: I0620 19:11:37.468508 2279 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:11:37.477294 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 19:11:37.492019 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 19:11:37.504208 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 19:11:37.506956 kubelet[2279]: I0620 19:11:37.506721 2279 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 20 19:11:37.507971 kubelet[2279]: I0620 19:11:37.507309 2279 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:11:37.507971 kubelet[2279]: I0620 19:11:37.507333 2279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:11:37.507971 kubelet[2279]: I0620 19:11:37.508074 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:11:37.510515 kubelet[2279]: E0620 19:11:37.510472 2279 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:11:37.510639 kubelet[2279]: E0620 19:11:37.510542 2279 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found"
Jun 20 19:11:37.564282 systemd[1]: Created slice kubepods-burstable-pod5389a1bb70ecc0ac690fa53a76a818f1.slice - libcontainer container kubepods-burstable-pod5389a1bb70ecc0ac690fa53a76a818f1.slice.
Jun 20 19:11:37.577185 kubelet[2279]: E0620 19:11:37.577134 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:37.584089 systemd[1]: Created slice kubepods-burstable-pod61b7108f6a620ea02a05c52e92819a57.slice - libcontainer container kubepods-burstable-pod61b7108f6a620ea02a05c52e92819a57.slice.
Jun 20 19:11:37.587652 kubelet[2279]: E0620 19:11:37.587430 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:37.590277 systemd[1]: Created slice kubepods-burstable-podeec5d70e6d9e1a1b7c4bea9d4917cd82.slice - libcontainer container kubepods-burstable-podeec5d70e6d9e1a1b7c4bea9d4917cd82.slice.
Jun 20 19:11:37.592625 kubelet[2279]: E0620 19:11:37.592587 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.619814 kubelet[2279]: E0620 19:11:37.619711 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="400ms" Jun 20 19:11:37.624336 kubelet[2279]: I0620 19:11:37.624296 2279 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.624934 kubelet[2279]: E0620 19:11:37.624857 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.718417 kubelet[2279]: I0620 19:11:37.718343 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61b7108f6a620ea02a05c52e92819a57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"61b7108f6a620ea02a05c52e92819a57\") " pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.718417 kubelet[2279]: I0620 19:11:37.718431 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-k8s-certs\") pod 
\"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.718845 kubelet[2279]: I0620 19:11:37.718468 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.718845 kubelet[2279]: I0620 19:11:37.718500 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.718845 kubelet[2279]: I0620 19:11:37.718530 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.718845 kubelet[2279]: I0620 19:11:37.718566 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.719023 kubelet[2279]: I0620 19:11:37.718597 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5389a1bb70ecc0ac690fa53a76a818f1-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"5389a1bb70ecc0ac690fa53a76a818f1\") " pod="kube-system/kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.719023 kubelet[2279]: I0620 19:11:37.718622 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61b7108f6a620ea02a05c52e92819a57-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"61b7108f6a620ea02a05c52e92819a57\") " pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.719023 kubelet[2279]: I0620 19:11:37.718654 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61b7108f6a620ea02a05c52e92819a57-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"61b7108f6a620ea02a05c52e92819a57\") " pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.835695 kubelet[2279]: I0620 19:11:37.835537 2279 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.836717 kubelet[2279]: E0620 19:11:37.836659 
2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:37.879584 containerd[1476]: time="2025-06-20T19:11:37.879485636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,Uid:5389a1bb70ecc0ac690fa53a76a818f1,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:37.889905 containerd[1476]: time="2025-06-20T19:11:37.889848184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,Uid:61b7108f6a620ea02a05c52e92819a57,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:37.894949 containerd[1476]: time="2025-06-20T19:11:37.893975935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,Uid:eec5d70e6d9e1a1b7c4bea9d4917cd82,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:38.021240 kubelet[2279]: E0620 19:11:38.021185 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="800ms" Jun 20 19:11:38.152733 kubelet[2279]: E0620 19:11:38.152479 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal.184ad5ff3c8c9ae0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,UID:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,},FirstTimestamp:2025-06-20 19:11:37.396083424 +0000 UTC m=+0.690524707,LastTimestamp:2025-06-20 19:11:37.396083424 +0000 UTC m=+0.690524707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,}" Jun 20 19:11:38.244823 kubelet[2279]: I0620 19:11:38.244778 2279 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:38.245269 kubelet[2279]: E0620 19:11:38.245215 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:38.278470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344388516.mount: Deactivated successfully. 
Jun 20 19:11:38.292957 containerd[1476]: time="2025-06-20T19:11:38.292863114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:38.297500 containerd[1476]: time="2025-06-20T19:11:38.297281404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jun 20 19:11:38.302764 containerd[1476]: time="2025-06-20T19:11:38.302634529Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:38.304566 containerd[1476]: time="2025-06-20T19:11:38.304472801Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:38.306170 containerd[1476]: time="2025-06-20T19:11:38.306092539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:11:38.310084 containerd[1476]: time="2025-06-20T19:11:38.309974944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:38.311549 containerd[1476]: time="2025-06-20T19:11:38.311485080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.837991ms" Jun 20 19:11:38.312789 containerd[1476]: 
time="2025-06-20T19:11:38.312562232Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:11:38.312789 containerd[1476]: time="2025-06-20T19:11:38.312632969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:11:38.318571 containerd[1476]: time="2025-06-20T19:11:38.318504909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 424.407613ms" Jun 20 19:11:38.324586 containerd[1476]: time="2025-06-20T19:11:38.324520027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 434.519011ms" Jun 20 19:11:38.572229 containerd[1476]: time="2025-06-20T19:11:38.571667701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:38.572229 containerd[1476]: time="2025-06-20T19:11:38.571773381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:38.572229 containerd[1476]: time="2025-06-20T19:11:38.571801563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:38.572229 containerd[1476]: time="2025-06-20T19:11:38.571969231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:38.572876 containerd[1476]: time="2025-06-20T19:11:38.567599702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:38.572876 containerd[1476]: time="2025-06-20T19:11:38.572257446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:38.572876 containerd[1476]: time="2025-06-20T19:11:38.572307888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:38.572876 containerd[1476]: time="2025-06-20T19:11:38.572446207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:38.574585 containerd[1476]: time="2025-06-20T19:11:38.574289886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:38.574585 containerd[1476]: time="2025-06-20T19:11:38.574368155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:38.574585 containerd[1476]: time="2025-06-20T19:11:38.574399482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:38.575007 containerd[1476]: time="2025-06-20T19:11:38.574541669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:38.620184 systemd[1]: Started cri-containerd-fae6ae7ba1e691edb4be104db7ef5dc35d565e22ae5d74d290be520b51551175.scope - libcontainer container fae6ae7ba1e691edb4be104db7ef5dc35d565e22ae5d74d290be520b51551175. 
Jun 20 19:11:38.628631 systemd[1]: Started cri-containerd-3015db6562dcb6a0fea4f4bf68c2dfc173b6c1a2c591b72caf830934d7971a44.scope - libcontainer container 3015db6562dcb6a0fea4f4bf68c2dfc173b6c1a2c591b72caf830934d7971a44. Jun 20 19:11:38.632086 systemd[1]: Started cri-containerd-5effadebecdee339fe62c16fa37dffa3434b9a89ecb011fe5f57508b63e898e6.scope - libcontainer container 5effadebecdee339fe62c16fa37dffa3434b9a89ecb011fe5f57508b63e898e6. Jun 20 19:11:38.674195 kubelet[2279]: W0620 19:11:38.673881 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jun 20 19:11:38.674195 kubelet[2279]: E0620 19:11:38.674027 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:11:38.723663 containerd[1476]: time="2025-06-20T19:11:38.723506171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,Uid:5389a1bb70ecc0ac690fa53a76a818f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3015db6562dcb6a0fea4f4bf68c2dfc173b6c1a2c591b72caf830934d7971a44\"" Jun 20 19:11:38.733199 kubelet[2279]: E0620 19:11:38.732353 2279 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-21291" Jun 20 19:11:38.738306 containerd[1476]: 
time="2025-06-20T19:11:38.738253832Z" level=info msg="CreateContainer within sandbox \"3015db6562dcb6a0fea4f4bf68c2dfc173b6c1a2c591b72caf830934d7971a44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:11:38.739468 kubelet[2279]: W0620 19:11:38.739227 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jun 20 19:11:38.739814 kubelet[2279]: E0620 19:11:38.739563 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:11:38.763235 containerd[1476]: time="2025-06-20T19:11:38.763067142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,Uid:61b7108f6a620ea02a05c52e92819a57,Namespace:kube-system,Attempt:0,} returns sandbox id \"5effadebecdee339fe62c16fa37dffa3434b9a89ecb011fe5f57508b63e898e6\"" Jun 20 19:11:38.767643 kubelet[2279]: E0620 19:11:38.767006 2279 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-21291" Jun 20 19:11:38.770555 containerd[1476]: time="2025-06-20T19:11:38.770494568Z" level=info msg="CreateContainer within sandbox \"5effadebecdee339fe62c16fa37dffa3434b9a89ecb011fe5f57508b63e898e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:11:38.785002 containerd[1476]: time="2025-06-20T19:11:38.784939644Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal,Uid:eec5d70e6d9e1a1b7c4bea9d4917cd82,Namespace:kube-system,Attempt:0,} returns sandbox id \"fae6ae7ba1e691edb4be104db7ef5dc35d565e22ae5d74d290be520b51551175\"" Jun 20 19:11:38.788109 kubelet[2279]: E0620 19:11:38.787712 2279 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flat" Jun 20 19:11:38.790135 containerd[1476]: time="2025-06-20T19:11:38.790078920Z" level=info msg="CreateContainer within sandbox \"fae6ae7ba1e691edb4be104db7ef5dc35d565e22ae5d74d290be520b51551175\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:11:38.793955 containerd[1476]: time="2025-06-20T19:11:38.793726288Z" level=info msg="CreateContainer within sandbox \"3015db6562dcb6a0fea4f4bf68c2dfc173b6c1a2c591b72caf830934d7971a44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1cd7cd4d62b0b6dba6b873e8c7a2cd3b5b9083e2c7693e6fada2c8360589efba\"" Jun 20 19:11:38.794816 containerd[1476]: time="2025-06-20T19:11:38.794769741Z" level=info msg="StartContainer for \"1cd7cd4d62b0b6dba6b873e8c7a2cd3b5b9083e2c7693e6fada2c8360589efba\"" Jun 20 19:11:38.798820 containerd[1476]: time="2025-06-20T19:11:38.798618184Z" level=info msg="CreateContainer within sandbox \"5effadebecdee339fe62c16fa37dffa3434b9a89ecb011fe5f57508b63e898e6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"faa5bfa01f2cd5dd2e8bc181c1607c24ac4ab6e6cb1ac90b51ba2ee7e75d5cd7\"" Jun 20 19:11:38.800847 containerd[1476]: time="2025-06-20T19:11:38.800722110Z" level=info msg="StartContainer for \"faa5bfa01f2cd5dd2e8bc181c1607c24ac4ab6e6cb1ac90b51ba2ee7e75d5cd7\"" Jun 20 19:11:38.816479 containerd[1476]: time="2025-06-20T19:11:38.816330106Z" 
level=info msg="CreateContainer within sandbox \"fae6ae7ba1e691edb4be104db7ef5dc35d565e22ae5d74d290be520b51551175\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d6d74baad6366d54112b9d6a57eb09c1e0e1b976dbfb4a9689bafe81eeb89352\"" Jun 20 19:11:38.818152 containerd[1476]: time="2025-06-20T19:11:38.817431003Z" level=info msg="StartContainer for \"d6d74baad6366d54112b9d6a57eb09c1e0e1b976dbfb4a9689bafe81eeb89352\"" Jun 20 19:11:38.822052 kubelet[2279]: E0620 19:11:38.821891 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="1.6s" Jun 20 19:11:38.852561 systemd[1]: Started cri-containerd-1cd7cd4d62b0b6dba6b873e8c7a2cd3b5b9083e2c7693e6fada2c8360589efba.scope - libcontainer container 1cd7cd4d62b0b6dba6b873e8c7a2cd3b5b9083e2c7693e6fada2c8360589efba. Jun 20 19:11:38.871243 systemd[1]: Started cri-containerd-faa5bfa01f2cd5dd2e8bc181c1607c24ac4ab6e6cb1ac90b51ba2ee7e75d5cd7.scope - libcontainer container faa5bfa01f2cd5dd2e8bc181c1607c24ac4ab6e6cb1ac90b51ba2ee7e75d5cd7. Jun 20 19:11:38.919591 systemd[1]: Started cri-containerd-d6d74baad6366d54112b9d6a57eb09c1e0e1b976dbfb4a9689bafe81eeb89352.scope - libcontainer container d6d74baad6366d54112b9d6a57eb09c1e0e1b976dbfb4a9689bafe81eeb89352. 
Jun 20 19:11:38.939795 kubelet[2279]: W0620 19:11:38.939647 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jun 20 19:11:38.939795 kubelet[2279]: E0620 19:11:38.939751 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:11:38.984614 containerd[1476]: time="2025-06-20T19:11:38.982307533Z" level=info msg="StartContainer for \"faa5bfa01f2cd5dd2e8bc181c1607c24ac4ab6e6cb1ac90b51ba2ee7e75d5cd7\" returns successfully" Jun 20 19:11:38.999518 kubelet[2279]: W0620 19:11:38.998788 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jun 20 19:11:38.999518 kubelet[2279]: E0620 19:11:38.999464 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:11:39.007046 containerd[1476]: time="2025-06-20T19:11:39.005492707Z" level=info msg="StartContainer for \"1cd7cd4d62b0b6dba6b873e8c7a2cd3b5b9083e2c7693e6fada2c8360589efba\" returns successfully" Jun 20 19:11:39.052181 containerd[1476]: time="2025-06-20T19:11:39.052021403Z" level=info msg="StartContainer for 
\"d6d74baad6366d54112b9d6a57eb09c1e0e1b976dbfb4a9689bafe81eeb89352\" returns successfully" Jun 20 19:11:39.054250 kubelet[2279]: I0620 19:11:39.053613 2279 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:39.054250 kubelet[2279]: E0620 19:11:39.054083 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:39.464344 kubelet[2279]: E0620 19:11:39.464299 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:39.464825 kubelet[2279]: E0620 19:11:39.464793 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:39.472014 kubelet[2279]: E0620 19:11:39.471685 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:40.485570 kubelet[2279]: E0620 19:11:40.485284 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:40.485570 kubelet[2279]: E0620 19:11:40.485422 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:40.660027 kubelet[2279]: I0620 19:11:40.659991 2279 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.489577 kubelet[2279]: E0620 19:11:41.489335 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.808949 kubelet[2279]: E0620 19:11:41.808885 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.854879 kubelet[2279]: I0620 19:11:41.854806 2279 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.918188 kubelet[2279]: I0620 19:11:41.918128 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.936347 kubelet[2279]: E0620 19:11:41.936268 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.936347 kubelet[2279]: I0620 19:11:41.936342 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.947015 kubelet[2279]: E0620 
19:11:41.945784 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.947215 kubelet[2279]: I0620 19:11:41.947047 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:41.953459 kubelet[2279]: E0620 19:11:41.953413 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:42.389805 kubelet[2279]: I0620 19:11:42.389748 2279 apiserver.go:52] "Watching apiserver" Jun 20 19:11:42.418474 kubelet[2279]: I0620 19:11:42.418421 2279 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:11:44.194353 systemd[1]: Reload requested from client PID 2553 ('systemctl') (unit session-9.scope)... Jun 20 19:11:44.194377 systemd[1]: Reloading... Jun 20 19:11:44.340018 zram_generator::config[2595]: No configuration found. Jun 20 19:11:44.364079 update_engine[1459]: I20250620 19:11:44.364006 1459 update_attempter.cc:509] Updating boot flags... 
Jun 20 19:11:44.464187 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2638) Jun 20 19:11:44.485495 kubelet[2279]: I0620 19:11:44.485431 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" Jun 20 19:11:44.500401 kubelet[2279]: W0620 19:11:44.498742 2279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 20 19:11:44.659945 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2640) Jun 20 19:11:44.713776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:11:44.844948 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2640) Jun 20 19:11:45.053583 systemd[1]: Reloading finished in 858 ms. Jun 20 19:11:45.222889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:45.245321 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:11:45.245603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:11:45.245677 systemd[1]: kubelet.service: Consumed 1.228s CPU time, 133M memory peak. Jun 20 19:11:45.258564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:11:45.608201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:11:45.608983 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:11:45.689434 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:11:45.689434 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:11:45.689434 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:11:45.690167 kubelet[2667]: I0620 19:11:45.689550 2667 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:11:45.702006 kubelet[2667]: I0620 19:11:45.701958 2667 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:11:45.702006 kubelet[2667]: I0620 19:11:45.701995 2667 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:11:45.703051 kubelet[2667]: I0620 19:11:45.702418 2667 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:11:45.708767 kubelet[2667]: I0620 19:11:45.708687 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:11:45.716356 kubelet[2667]: I0620 19:11:45.716314 2667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 19:11:45.725190 kubelet[2667]: E0620 19:11:45.725139 2667 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jun 20 19:11:45.725190 kubelet[2667]: I0620 19:11:45.725188 2667 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jun 20 19:11:45.730535 kubelet[2667]: I0620 19:11:45.730437 2667 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:11:45.731234 kubelet[2667]: I0620 19:11:45.731180 2667 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 19:11:45.731590 kubelet[2667]: I0620 19:11:45.731233 2667 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 19:11:45.731773 kubelet[2667]: I0620 19:11:45.731594 2667 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 19:11:45.731773 kubelet[2667]: I0620 19:11:45.731614 2667 container_manager_linux.go:304] "Creating device plugin manager"
Jun 20 19:11:45.731773 kubelet[2667]: I0620 19:11:45.731690 2667 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:11:45.733955 kubelet[2667]: I0620 19:11:45.732297 2667 kubelet.go:446] "Attempting to sync node with API server"
Jun 20 19:11:45.736047 kubelet[2667]: I0620 19:11:45.735974 2667 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 19:11:45.736521 kubelet[2667]: I0620 19:11:45.736116 2667 kubelet.go:352] "Adding apiserver pod source"
Jun 20 19:11:45.736521 kubelet[2667]: I0620 19:11:45.736141 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 19:11:45.739939 kubelet[2667]: I0620 19:11:45.738161 2667 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jun 20 19:11:45.741933 kubelet[2667]: I0620 19:11:45.741338 2667 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 19:11:45.747408 kubelet[2667]: I0620 19:11:45.747369 2667 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 19:11:45.747579 kubelet[2667]: I0620 19:11:45.747429 2667 server.go:1287] "Started kubelet"
Jun 20 19:11:45.759288 kubelet[2667]: I0620 19:11:45.759081 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 19:11:45.768506 sudo[2682]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jun 20 19:11:45.769116 sudo[2682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jun 20 19:11:45.784876 kubelet[2667]: I0620 19:11:45.784803 2667 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 19:11:45.787064 kubelet[2667]: I0620 19:11:45.786980 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 19:11:45.787940 kubelet[2667]: I0620 19:11:45.787407 2667 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 19:11:45.787940 kubelet[2667]: I0620 19:11:45.787807 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 19:11:45.789730 kubelet[2667]: I0620 19:11:45.789023 2667 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 19:11:45.789730 kubelet[2667]: E0620 19:11:45.789397 2667 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" not found"
Jun 20 19:11:45.795657 kubelet[2667]: I0620 19:11:45.794236 2667 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 19:11:45.795657 kubelet[2667]: I0620 19:11:45.794435 2667 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 19:11:45.797281 kubelet[2667]: I0620 19:11:45.797241 2667 server.go:479] "Adding debug handlers to kubelet server"
Jun 20 19:11:45.818772 kubelet[2667]: I0620 19:11:45.816569 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 20 19:11:45.821097 kubelet[2667]: I0620 19:11:45.820947 2667 factory.go:221] Registration of the systemd container factory successfully
Jun 20 19:11:45.821097 kubelet[2667]: I0620 19:11:45.821082 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 19:11:45.827439 kubelet[2667]: I0620 19:11:45.827398 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 20 19:11:45.827439 kubelet[2667]: I0620 19:11:45.827450 2667 status_manager.go:227] "Starting to sync pod status with apiserver"
Jun 20 19:11:45.827650 kubelet[2667]: I0620 19:11:45.827492 2667 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:11:45.827650 kubelet[2667]: I0620 19:11:45.827508 2667 kubelet.go:2382] "Starting kubelet main sync loop"
Jun 20 19:11:45.827650 kubelet[2667]: E0620 19:11:45.827579 2667 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 19:11:45.850366 kubelet[2667]: I0620 19:11:45.847329 2667 factory.go:221] Registration of the containerd container factory successfully
Jun 20 19:11:45.928411 kubelet[2667]: E0620 19:11:45.927854 2667 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 20 19:11:45.964346 kubelet[2667]: I0620 19:11:45.964315 2667 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.964670 2667 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.964703 2667 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.965057 2667 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.965077 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.965109 2667 policy_none.go:49] "None policy: Start"
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.965124 2667 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.965142 2667 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 19:11:45.965448 kubelet[2667]: I0620 19:11:45.965324 2667 state_mem.go:75] "Updated machine memory state"
Jun 20 19:11:45.974200 kubelet[2667]: I0620 19:11:45.974169 2667 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 20 19:11:45.976134 kubelet[2667]: I0620 19:11:45.976109 2667 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 19:11:45.977149 kubelet[2667]: I0620 19:11:45.976503 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 19:11:45.979969 kubelet[2667]: I0620 19:11:45.976908 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 19:11:45.984722 kubelet[2667]: E0620 19:11:45.984629 2667 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 19:11:46.111384 kubelet[2667]: I0620 19:11:46.110976 2667 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.123705 kubelet[2667]: I0620 19:11:46.123297 2667 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.123705 kubelet[2667]: I0620 19:11:46.123403 2667 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.131829 kubelet[2667]: I0620 19:11:46.128552 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.131829 kubelet[2667]: I0620 19:11:46.129092 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.131829 kubelet[2667]: I0620 19:11:46.129532 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.140111 kubelet[2667]: W0620 19:11:46.140013 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jun 20 19:11:46.148169 kubelet[2667]: W0620 19:11:46.147986 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jun 20 19:11:46.148169 kubelet[2667]: W0620 19:11:46.148050 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jun 20 19:11:46.148169 kubelet[2667]: E0620 19:11:46.148069 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.197680 kubelet[2667]: I0620 19:11:46.197148 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61b7108f6a620ea02a05c52e92819a57-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"61b7108f6a620ea02a05c52e92819a57\") " pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.197680 kubelet[2667]: I0620 19:11:46.197212 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.197680 kubelet[2667]: I0620 19:11:46.197250 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.197680 kubelet[2667]: I0620 19:11:46.197280 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.198043 kubelet[2667]: I0620 19:11:46.197315 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.198043 kubelet[2667]: I0620 19:11:46.197349 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5389a1bb70ecc0ac690fa53a76a818f1-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"5389a1bb70ecc0ac690fa53a76a818f1\") " pod="kube-system/kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.198043 kubelet[2667]: I0620 19:11:46.197378 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61b7108f6a620ea02a05c52e92819a57-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"61b7108f6a620ea02a05c52e92819a57\") " pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.198043 kubelet[2667]: I0620 19:11:46.197413 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61b7108f6a620ea02a05c52e92819a57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"61b7108f6a620ea02a05c52e92819a57\") " pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.198261 kubelet[2667]: I0620 19:11:46.197443 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eec5d70e6d9e1a1b7c4bea9d4917cd82-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" (UID: \"eec5d70e6d9e1a1b7c4bea9d4917cd82\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.613610 sudo[2682]: pam_unix(sudo:session): session closed for user root
Jun 20 19:11:46.749319 kubelet[2667]: I0620 19:11:46.749259 2667 apiserver.go:52] "Watching apiserver"
Jun 20 19:11:46.794746 kubelet[2667]: I0620 19:11:46.794690 2667 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 19:11:46.905885 kubelet[2667]: I0620 19:11:46.905751 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.920670 kubelet[2667]: W0620 19:11:46.920628 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jun 20 19:11:46.920866 kubelet[2667]: E0620 19:11:46.920710 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal"
Jun 20 19:11:46.965201 kubelet[2667]: I0620 19:11:46.965116 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" podStartSLOduration=0.965091154 podStartE2EDuration="965.091154ms" podCreationTimestamp="2025-06-20 19:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:46.951046127 +0000 UTC m=+1.335058876" watchObservedRunningTime="2025-06-20 19:11:46.965091154 +0000 UTC m=+1.349103900"
Jun 20 19:11:46.982945 kubelet[2667]: I0620 19:11:46.981426 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" podStartSLOduration=0.981405107 podStartE2EDuration="981.405107ms" podCreationTimestamp="2025-06-20 19:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:46.966045332 +0000 UTC m=+1.350058062" watchObservedRunningTime="2025-06-20 19:11:46.981405107 +0000 UTC m=+1.365417850"
Jun 20 19:11:47.000428 kubelet[2667]: I0620 19:11:47.000339 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" podStartSLOduration=3.000317321 podStartE2EDuration="3.000317321s" podCreationTimestamp="2025-06-20 19:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:46.982644667 +0000 UTC m=+1.366657423" watchObservedRunningTime="2025-06-20 19:11:47.000317321 +0000 UTC m=+1.384330067"
Jun 20 19:11:48.849070 sudo[1755]: pam_unix(sudo:session): session closed for user root
Jun 20 19:11:48.893399 sshd[1754]: Connection closed by 147.75.109.163 port 41750
Jun 20 19:11:48.894226 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
Jun 20 19:11:48.899646 systemd[1]: sshd@8-10.128.0.67:22-147.75.109.163:41750.service: Deactivated successfully.
Jun 20 19:11:48.902437 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 19:11:48.902742 systemd[1]: session-9.scope: Consumed 6.652s CPU time, 264.7M memory peak.
Jun 20 19:11:48.905556 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Jun 20 19:11:48.907655 systemd-logind[1458]: Removed session 9.
Jun 20 19:11:49.214246 kubelet[2667]: I0620 19:11:49.212880 2667 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 19:11:49.214246 kubelet[2667]: I0620 19:11:49.213995 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 19:11:49.215196 containerd[1476]: time="2025-06-20T19:11:49.213412550Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 20 19:11:50.115924 systemd[1]: Created slice kubepods-besteffort-pod79781b3d_fc6c_4f35_a311_50786f55b505.slice - libcontainer container kubepods-besteffort-pod79781b3d_fc6c_4f35_a311_50786f55b505.slice.
Jun 20 19:11:50.122653 kubelet[2667]: I0620 19:11:50.122602 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79781b3d-fc6c-4f35-a311-50786f55b505-lib-modules\") pod \"kube-proxy-p2gt7\" (UID: \"79781b3d-fc6c-4f35-a311-50786f55b505\") " pod="kube-system/kube-proxy-p2gt7"
Jun 20 19:11:50.122834 kubelet[2667]: I0620 19:11:50.122669 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79781b3d-fc6c-4f35-a311-50786f55b505-xtables-lock\") pod \"kube-proxy-p2gt7\" (UID: \"79781b3d-fc6c-4f35-a311-50786f55b505\") " pod="kube-system/kube-proxy-p2gt7"
Jun 20 19:11:50.122834 kubelet[2667]: I0620 19:11:50.122704 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79781b3d-fc6c-4f35-a311-50786f55b505-kube-proxy\") pod \"kube-proxy-p2gt7\" (UID: \"79781b3d-fc6c-4f35-a311-50786f55b505\") " pod="kube-system/kube-proxy-p2gt7"
Jun 20 19:11:50.122834 kubelet[2667]: I0620 19:11:50.122733 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdlm\" (UniqueName: \"kubernetes.io/projected/79781b3d-fc6c-4f35-a311-50786f55b505-kube-api-access-5jdlm\") pod \"kube-proxy-p2gt7\" (UID: \"79781b3d-fc6c-4f35-a311-50786f55b505\") " pod="kube-system/kube-proxy-p2gt7"
Jun 20 19:11:50.142608 systemd[1]: Created slice kubepods-burstable-pod47471ebf_3d99_4372_9b55_4baeba3f8df7.slice - libcontainer container kubepods-burstable-pod47471ebf_3d99_4372_9b55_4baeba3f8df7.slice.
Jun 20 19:11:50.153200 kubelet[2667]: W0620 19:11:50.153152 2667 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object
Jun 20 19:11:50.153388 kubelet[2667]: E0620 19:11:50.153218 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Jun 20 19:11:50.153591 kubelet[2667]: W0620 19:11:50.153458 2667 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object
Jun 20 19:11:50.153591 kubelet[2667]: E0620 19:11:50.153493 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Jun 20 19:11:50.155072 kubelet[2667]: W0620 19:11:50.153750 2667 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object
Jun 20 19:11:50.155072 kubelet[2667]: E0620 19:11:50.153777 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Jun 20 19:11:50.223840 kubelet[2667]: I0620 19:11:50.223563 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-hostproc\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.224725 kubelet[2667]: I0620 19:11:50.223879 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47471ebf-3d99-4372-9b55-4baeba3f8df7-clustermesh-secrets\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.224725 kubelet[2667]: I0620 19:11:50.224113 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-net\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.226950 kubelet[2667]: I0620 19:11:50.226128 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-run\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.226950 kubelet[2667]: I0620 19:11:50.226206 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-kernel\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.226950 kubelet[2667]: I0620 19:11:50.226366 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cni-path\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.226950 kubelet[2667]: I0620 19:11:50.226424 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-hubble-tls\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.226950 kubelet[2667]: I0620 19:11:50.226454 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w74h\" (UniqueName: \"kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-kube-api-access-4w74h\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.226950 kubelet[2667]: I0620 19:11:50.226487 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-bpf-maps\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.227366 kubelet[2667]: I0620 19:11:50.226513 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-xtables-lock\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.227366 kubelet[2667]: I0620 19:11:50.226554 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-etc-cni-netd\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.227366 kubelet[2667]: I0620 19:11:50.226582 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-lib-modules\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.227366 kubelet[2667]: I0620 19:11:50.226611 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-config-path\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.227366 kubelet[2667]: I0620 19:11:50.226730 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-cgroup\") pod \"cilium-57xnh\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " pod="kube-system/cilium-57xnh"
Jun 20 19:11:50.293211 systemd[1]: Created slice kubepods-besteffort-pod9fb0f7b5_e282_4417_8aff_e06a491718c8.slice - libcontainer container kubepods-besteffort-pod9fb0f7b5_e282_4417_8aff_e06a491718c8.slice.
Jun 20 19:11:50.327624 kubelet[2667]: I0620 19:11:50.327566 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb0f7b5-e282-4417-8aff-e06a491718c8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r4knb\" (UID: \"9fb0f7b5-e282-4417-8aff-e06a491718c8\") " pod="kube-system/cilium-operator-6c4d7847fc-r4knb"
Jun 20 19:11:50.328338 kubelet[2667]: I0620 19:11:50.327845 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjvm\" (UniqueName: \"kubernetes.io/projected/9fb0f7b5-e282-4417-8aff-e06a491718c8-kube-api-access-dsjvm\") pod \"cilium-operator-6c4d7847fc-r4knb\" (UID: \"9fb0f7b5-e282-4417-8aff-e06a491718c8\") " pod="kube-system/cilium-operator-6c4d7847fc-r4knb"
Jun 20 19:11:50.425268 containerd[1476]: time="2025-06-20T19:11:50.425129179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2gt7,Uid:79781b3d-fc6c-4f35-a311-50786f55b505,Namespace:kube-system,Attempt:0,}"
Jun 20 19:11:50.472934 containerd[1476]: time="2025-06-20T19:11:50.472221001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 19:11:50.472934 containerd[1476]: time="2025-06-20T19:11:50.472319518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 19:11:50.472934 containerd[1476]: time="2025-06-20T19:11:50.472345966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:50.472934 containerd[1476]: time="2025-06-20T19:11:50.472496812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 19:11:50.506221 systemd[1]: Started cri-containerd-e61fae71cb779124c83d99d90585c001736818969b24d5bd5d036f43fdb83bff.scope - libcontainer container e61fae71cb779124c83d99d90585c001736818969b24d5bd5d036f43fdb83bff.
Jun 20 19:11:50.548024 containerd[1476]: time="2025-06-20T19:11:50.547969703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2gt7,Uid:79781b3d-fc6c-4f35-a311-50786f55b505,Namespace:kube-system,Attempt:0,} returns sandbox id \"e61fae71cb779124c83d99d90585c001736818969b24d5bd5d036f43fdb83bff\""
Jun 20 19:11:50.553523 containerd[1476]: time="2025-06-20T19:11:50.553462033Z" level=info msg="CreateContainer within sandbox \"e61fae71cb779124c83d99d90585c001736818969b24d5bd5d036f43fdb83bff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 19:11:50.576308 containerd[1476]: time="2025-06-20T19:11:50.576232054Z" level=info msg="CreateContainer within sandbox \"e61fae71cb779124c83d99d90585c001736818969b24d5bd5d036f43fdb83bff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1551031e24ef537050de7bfbd1599f66bcf2808089b9f0b2b872db80084cc1b\""
Jun 20 19:11:50.578178 containerd[1476]: time="2025-06-20T19:11:50.577242916Z" level=info msg="StartContainer for \"f1551031e24ef537050de7bfbd1599f66bcf2808089b9f0b2b872db80084cc1b\""
Jun 20 19:11:50.621213 systemd[1]: Started cri-containerd-f1551031e24ef537050de7bfbd1599f66bcf2808089b9f0b2b872db80084cc1b.scope - libcontainer container f1551031e24ef537050de7bfbd1599f66bcf2808089b9f0b2b872db80084cc1b. Jun 20 19:11:50.672025 containerd[1476]: time="2025-06-20T19:11:50.671950527Z" level=info msg="StartContainer for \"f1551031e24ef537050de7bfbd1599f66bcf2808089b9f0b2b872db80084cc1b\" returns successfully" Jun 20 19:11:51.199714 containerd[1476]: time="2025-06-20T19:11:51.199603363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r4knb,Uid:9fb0f7b5-e282-4417-8aff-e06a491718c8,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:51.236773 containerd[1476]: time="2025-06-20T19:11:51.236468671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:51.236773 containerd[1476]: time="2025-06-20T19:11:51.236587046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:51.236773 containerd[1476]: time="2025-06-20T19:11:51.236609335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:51.237153 containerd[1476]: time="2025-06-20T19:11:51.236739157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:51.283213 systemd[1]: Started cri-containerd-5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f.scope - libcontainer container 5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f. 
Jun 20 19:11:51.340214 containerd[1476]: time="2025-06-20T19:11:51.340128157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r4knb,Uid:9fb0f7b5-e282-4417-8aff-e06a491718c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\"" Jun 20 19:11:51.343649 containerd[1476]: time="2025-06-20T19:11:51.343436784Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:11:51.349703 containerd[1476]: time="2025-06-20T19:11:51.349653602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57xnh,Uid:47471ebf-3d99-4372-9b55-4baeba3f8df7,Namespace:kube-system,Attempt:0,}" Jun 20 19:11:51.391859 containerd[1476]: time="2025-06-20T19:11:51.391643841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:51.391859 containerd[1476]: time="2025-06-20T19:11:51.391731797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:51.391859 containerd[1476]: time="2025-06-20T19:11:51.391761610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:51.392203 containerd[1476]: time="2025-06-20T19:11:51.391898920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:51.430209 systemd[1]: Started cri-containerd-06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19.scope - libcontainer container 06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19. 
Jun 20 19:11:51.467854 containerd[1476]: time="2025-06-20T19:11:51.467567832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57xnh,Uid:47471ebf-3d99-4372-9b55-4baeba3f8df7,Namespace:kube-system,Attempt:0,} returns sandbox id \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\"" Jun 20 19:11:51.820997 kubelet[2667]: I0620 19:11:51.820896 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p2gt7" podStartSLOduration=1.8208705649999999 podStartE2EDuration="1.820870565s" podCreationTimestamp="2025-06-20 19:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:50.928156607 +0000 UTC m=+5.312169353" watchObservedRunningTime="2025-06-20 19:11:51.820870565 +0000 UTC m=+6.204883314" Jun 20 19:11:52.275113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440691789.mount: Deactivated successfully. Jun 20 19:11:53.081093 containerd[1476]: time="2025-06-20T19:11:53.081017049Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:11:53.082533 containerd[1476]: time="2025-06-20T19:11:53.082420536Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:11:53.084175 containerd[1476]: time="2025-06-20T19:11:53.084108226Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:11:53.086499 containerd[1476]: time="2025-06-20T19:11:53.086306697Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.742641858s" Jun 20 19:11:53.086499 containerd[1476]: time="2025-06-20T19:11:53.086362667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:11:53.088704 containerd[1476]: time="2025-06-20T19:11:53.088011290Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:11:53.089909 containerd[1476]: time="2025-06-20T19:11:53.089687963Z" level=info msg="CreateContainer within sandbox \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:11:53.112718 containerd[1476]: time="2025-06-20T19:11:53.112665507Z" level=info msg="CreateContainer within sandbox \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\"" Jun 20 19:11:53.113671 containerd[1476]: time="2025-06-20T19:11:53.113610695Z" level=info msg="StartContainer for \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\"" Jun 20 19:11:53.160171 systemd[1]: Started cri-containerd-04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf.scope - libcontainer container 04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf. 
Jun 20 19:11:53.203132 containerd[1476]: time="2025-06-20T19:11:53.203070697Z" level=info msg="StartContainer for \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\" returns successfully" Jun 20 19:11:58.490356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980220294.mount: Deactivated successfully. Jun 20 19:12:01.547529 containerd[1476]: time="2025-06-20T19:12:01.547445892Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:12:01.549774 containerd[1476]: time="2025-06-20T19:12:01.549520797Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:12:01.551140 containerd[1476]: time="2025-06-20T19:12:01.551056895Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:12:01.553418 containerd[1476]: time="2025-06-20T19:12:01.553365751Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.46530795s" Jun 20 19:12:01.553541 containerd[1476]: time="2025-06-20T19:12:01.553422325Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:12:01.556939 containerd[1476]: time="2025-06-20T19:12:01.556574926Z" level=info 
msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:12:01.579606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477263230.mount: Deactivated successfully. Jun 20 19:12:01.584414 containerd[1476]: time="2025-06-20T19:12:01.584185480Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\"" Jun 20 19:12:01.586939 containerd[1476]: time="2025-06-20T19:12:01.585610096Z" level=info msg="StartContainer for \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\"" Jun 20 19:12:01.639857 systemd[1]: Started cri-containerd-b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017.scope - libcontainer container b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017. Jun 20 19:12:01.682835 containerd[1476]: time="2025-06-20T19:12:01.682744810Z" level=info msg="StartContainer for \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\" returns successfully" Jun 20 19:12:01.701558 systemd[1]: cri-containerd-b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017.scope: Deactivated successfully. 
Jun 20 19:12:02.001247 kubelet[2667]: I0620 19:12:02.000632 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r4knb" podStartSLOduration=10.255231808 podStartE2EDuration="12.000606463s" podCreationTimestamp="2025-06-20 19:11:50 +0000 UTC" firstStartedPulling="2025-06-20 19:11:51.342374862 +0000 UTC m=+5.726387598" lastFinishedPulling="2025-06-20 19:11:53.087749518 +0000 UTC m=+7.471762253" observedRunningTime="2025-06-20 19:11:54.011869424 +0000 UTC m=+8.395882169" watchObservedRunningTime="2025-06-20 19:12:02.000606463 +0000 UTC m=+16.384619237" Jun 20 19:12:02.570538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017-rootfs.mount: Deactivated successfully. Jun 20 19:12:03.959795 containerd[1476]: time="2025-06-20T19:12:03.959692794Z" level=info msg="shim disconnected" id=b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017 namespace=k8s.io Jun 20 19:12:03.959795 containerd[1476]: time="2025-06-20T19:12:03.959768543Z" level=warning msg="cleaning up after shim disconnected" id=b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017 namespace=k8s.io Jun 20 19:12:03.959795 containerd[1476]: time="2025-06-20T19:12:03.959783895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:12:04.987659 containerd[1476]: time="2025-06-20T19:12:04.987398379Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:12:05.015016 containerd[1476]: time="2025-06-20T19:12:05.012634062Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\"" Jun 20 
19:12:05.016974 containerd[1476]: time="2025-06-20T19:12:05.016570483Z" level=info msg="StartContainer for \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\"" Jun 20 19:12:05.016633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68182411.mount: Deactivated successfully. Jun 20 19:12:05.074214 systemd[1]: Started cri-containerd-a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef.scope - libcontainer container a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef. Jun 20 19:12:05.114381 containerd[1476]: time="2025-06-20T19:12:05.114243060Z" level=info msg="StartContainer for \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\" returns successfully" Jun 20 19:12:05.134254 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:12:05.134565 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:12:05.136456 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:12:05.144301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:12:05.150045 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:12:05.151778 systemd[1]: cri-containerd-a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef.scope: Deactivated successfully. Jun 20 19:12:05.189444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 20 19:12:05.192719 containerd[1476]: time="2025-06-20T19:12:05.192642849Z" level=info msg="shim disconnected" id=a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef namespace=k8s.io Jun 20 19:12:05.193351 containerd[1476]: time="2025-06-20T19:12:05.193115751Z" level=warning msg="cleaning up after shim disconnected" id=a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef namespace=k8s.io Jun 20 19:12:05.193522 containerd[1476]: time="2025-06-20T19:12:05.193484319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:12:05.993180 containerd[1476]: time="2025-06-20T19:12:05.992654569Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:12:06.007689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef-rootfs.mount: Deactivated successfully. Jun 20 19:12:06.028837 containerd[1476]: time="2025-06-20T19:12:06.028756240Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\"" Jun 20 19:12:06.031091 containerd[1476]: time="2025-06-20T19:12:06.031025740Z" level=info msg="StartContainer for \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\"" Jun 20 19:12:06.102229 systemd[1]: Started cri-containerd-a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618.scope - libcontainer container a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618. 
Jun 20 19:12:06.157645 containerd[1476]: time="2025-06-20T19:12:06.157578006Z" level=info msg="StartContainer for \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\" returns successfully" Jun 20 19:12:06.163355 systemd[1]: cri-containerd-a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618.scope: Deactivated successfully. Jun 20 19:12:06.197301 containerd[1476]: time="2025-06-20T19:12:06.197220290Z" level=info msg="shim disconnected" id=a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618 namespace=k8s.io Jun 20 19:12:06.197301 containerd[1476]: time="2025-06-20T19:12:06.197296496Z" level=warning msg="cleaning up after shim disconnected" id=a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618 namespace=k8s.io Jun 20 19:12:06.197301 containerd[1476]: time="2025-06-20T19:12:06.197309530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:12:06.997208 containerd[1476]: time="2025-06-20T19:12:06.997147090Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:12:07.007183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618-rootfs.mount: Deactivated successfully. 
Jun 20 19:12:07.033020 containerd[1476]: time="2025-06-20T19:12:07.030259079Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\"" Jun 20 19:12:07.033899 containerd[1476]: time="2025-06-20T19:12:07.033753245Z" level=info msg="StartContainer for \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\"" Jun 20 19:12:07.089937 systemd[1]: run-containerd-runc-k8s.io-d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e-runc.jY6uFi.mount: Deactivated successfully. Jun 20 19:12:07.101237 systemd[1]: Started cri-containerd-d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e.scope - libcontainer container d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e. Jun 20 19:12:07.137718 systemd[1]: cri-containerd-d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e.scope: Deactivated successfully. 
Jun 20 19:12:07.142594 containerd[1476]: time="2025-06-20T19:12:07.142432558Z" level=info msg="StartContainer for \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\" returns successfully" Jun 20 19:12:07.176248 containerd[1476]: time="2025-06-20T19:12:07.176154797Z" level=info msg="shim disconnected" id=d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e namespace=k8s.io Jun 20 19:12:07.176248 containerd[1476]: time="2025-06-20T19:12:07.176232519Z" level=warning msg="cleaning up after shim disconnected" id=d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e namespace=k8s.io Jun 20 19:12:07.176248 containerd[1476]: time="2025-06-20T19:12:07.176248312Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:12:08.010971 containerd[1476]: time="2025-06-20T19:12:08.009357571Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:12:08.009784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e-rootfs.mount: Deactivated successfully. Jun 20 19:12:08.039170 containerd[1476]: time="2025-06-20T19:12:08.038556323Z" level=info msg="CreateContainer within sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\"" Jun 20 19:12:08.041121 containerd[1476]: time="2025-06-20T19:12:08.040958564Z" level=info msg="StartContainer for \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\"" Jun 20 19:12:08.097241 systemd[1]: Started cri-containerd-b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c.scope - libcontainer container b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c. 
Jun 20 19:12:08.145056 containerd[1476]: time="2025-06-20T19:12:08.144980652Z" level=info msg="StartContainer for \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\" returns successfully" Jun 20 19:12:08.334114 kubelet[2667]: I0620 19:12:08.333061 2667 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:12:08.386809 kubelet[2667]: I0620 19:12:08.386748 2667 status_manager.go:890] "Failed to get status for pod" podUID="62dd3dd1-fcae-469f-8641-c0bdc2e6a0be" pod="kube-system/coredns-668d6bf9bc-zfl99" err="pods \"coredns-668d6bf9bc-zfl99\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" Jun 20 19:12:08.387077 kubelet[2667]: W0620 19:12:08.387055 2667 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object Jun 20 19:12:08.387216 kubelet[2667]: E0620 19:12:08.387100 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError" Jun 20 19:12:08.390540 systemd[1]: Created slice kubepods-burstable-pod62dd3dd1_fcae_469f_8641_c0bdc2e6a0be.slice - 
libcontainer container kubepods-burstable-pod62dd3dd1_fcae_469f_8641_c0bdc2e6a0be.slice. Jun 20 19:12:08.404526 kubelet[2667]: I0620 19:12:08.404453 2667 status_manager.go:890] "Failed to get status for pod" podUID="62dd3dd1-fcae-469f-8641-c0bdc2e6a0be" pod="kube-system/coredns-668d6bf9bc-zfl99" err="pods \"coredns-668d6bf9bc-zfl99\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" Jun 20 19:12:08.417992 systemd[1]: Created slice kubepods-burstable-pod043e54ac_30cd_45fa_8a15_010c55524474.slice - libcontainer container kubepods-burstable-pod043e54ac_30cd_45fa_8a15_010c55524474.slice. Jun 20 19:12:08.476290 kubelet[2667]: I0620 19:12:08.476227 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62dd3dd1-fcae-469f-8641-c0bdc2e6a0be-config-volume\") pod \"coredns-668d6bf9bc-zfl99\" (UID: \"62dd3dd1-fcae-469f-8641-c0bdc2e6a0be\") " pod="kube-system/coredns-668d6bf9bc-zfl99" Jun 20 19:12:08.476477 kubelet[2667]: I0620 19:12:08.476302 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcmd\" (UniqueName: \"kubernetes.io/projected/043e54ac-30cd-45fa-8a15-010c55524474-kube-api-access-jbcmd\") pod \"coredns-668d6bf9bc-wnrpc\" (UID: \"043e54ac-30cd-45fa-8a15-010c55524474\") " pod="kube-system/coredns-668d6bf9bc-wnrpc" Jun 20 19:12:08.476477 kubelet[2667]: I0620 19:12:08.476346 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/043e54ac-30cd-45fa-8a15-010c55524474-config-volume\") pod \"coredns-668d6bf9bc-wnrpc\" (UID: \"043e54ac-30cd-45fa-8a15-010c55524474\") " 
pod="kube-system/coredns-668d6bf9bc-wnrpc" Jun 20 19:12:08.476477 kubelet[2667]: I0620 19:12:08.476377 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvl8\" (UniqueName: \"kubernetes.io/projected/62dd3dd1-fcae-469f-8641-c0bdc2e6a0be-kube-api-access-xcvl8\") pod \"coredns-668d6bf9bc-zfl99\" (UID: \"62dd3dd1-fcae-469f-8641-c0bdc2e6a0be\") " pod="kube-system/coredns-668d6bf9bc-zfl99" Jun 20 19:12:09.581950 kubelet[2667]: E0620 19:12:09.580276 2667 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 20 19:12:09.581950 kubelet[2667]: E0620 19:12:09.580412 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62dd3dd1-fcae-469f-8641-c0bdc2e6a0be-config-volume podName:62dd3dd1-fcae-469f-8641-c0bdc2e6a0be nodeName:}" failed. No retries permitted until 2025-06-20 19:12:10.080379159 +0000 UTC m=+24.464391902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/62dd3dd1-fcae-469f-8641-c0bdc2e6a0be-config-volume") pod "coredns-668d6bf9bc-zfl99" (UID: "62dd3dd1-fcae-469f-8641-c0bdc2e6a0be") : failed to sync configmap cache: timed out waiting for the condition Jun 20 19:12:09.581950 kubelet[2667]: E0620 19:12:09.580692 2667 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 20 19:12:09.581950 kubelet[2667]: E0620 19:12:09.580741 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/043e54ac-30cd-45fa-8a15-010c55524474-config-volume podName:043e54ac-30cd-45fa-8a15-010c55524474 nodeName:}" failed. No retries permitted until 2025-06-20 19:12:10.080724966 +0000 UTC m=+24.464737702 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/043e54ac-30cd-45fa-8a15-010c55524474-config-volume") pod "coredns-668d6bf9bc-wnrpc" (UID: "043e54ac-30cd-45fa-8a15-010c55524474") : failed to sync configmap cache: timed out waiting for the condition Jun 20 19:12:10.206017 containerd[1476]: time="2025-06-20T19:12:10.205899317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zfl99,Uid:62dd3dd1-fcae-469f-8641-c0bdc2e6a0be,Namespace:kube-system,Attempt:0,}" Jun 20 19:12:10.226970 containerd[1476]: time="2025-06-20T19:12:10.225702457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wnrpc,Uid:043e54ac-30cd-45fa-8a15-010c55524474,Namespace:kube-system,Attempt:0,}" Jun 20 19:12:10.491700 systemd-networkd[1367]: cilium_host: Link UP Jun 20 19:12:10.492401 systemd-networkd[1367]: cilium_net: Link UP Jun 20 19:12:10.492691 systemd-networkd[1367]: cilium_net: Gained carrier Jun 20 19:12:10.493006 systemd-networkd[1367]: cilium_host: Gained carrier Jun 20 19:12:10.653120 systemd-networkd[1367]: cilium_vxlan: Link UP Jun 20 19:12:10.653133 systemd-networkd[1367]: cilium_vxlan: Gained carrier Jun 20 19:12:10.673044 systemd-networkd[1367]: cilium_net: Gained IPv6LL Jun 20 19:12:10.807103 systemd-networkd[1367]: cilium_host: Gained IPv6LL Jun 20 19:12:10.947122 kernel: NET: Registered PF_ALG protocol family Jun 20 19:12:11.854667 systemd-networkd[1367]: lxc_health: Link UP Jun 20 19:12:11.865222 systemd-networkd[1367]: lxc_health: Gained carrier Jun 20 19:12:12.265731 systemd-networkd[1367]: lxcdf613522221d: Link UP Jun 20 19:12:12.276974 kernel: eth0: renamed from tmp17666 Jun 20 19:12:12.286141 systemd-networkd[1367]: lxcdf613522221d: Gained carrier Jun 20 19:12:12.338011 kernel: eth0: renamed from tmpa43d4 Jun 20 19:12:12.338711 systemd-networkd[1367]: lxcc47e4e07ea68: Link UP Jun 20 19:12:12.348678 systemd-networkd[1367]: lxcc47e4e07ea68: Gained carrier Jun 20 
19:12:12.463197 systemd-networkd[1367]: cilium_vxlan: Gained IPv6LL Jun 20 19:12:13.391136 kubelet[2667]: I0620 19:12:13.391050 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-57xnh" podStartSLOduration=13.305739301 podStartE2EDuration="23.390669121s" podCreationTimestamp="2025-06-20 19:11:50 +0000 UTC" firstStartedPulling="2025-06-20 19:11:51.469839794 +0000 UTC m=+5.853852527" lastFinishedPulling="2025-06-20 19:12:01.554769629 +0000 UTC m=+15.938782347" observedRunningTime="2025-06-20 19:12:09.045717826 +0000 UTC m=+23.429730572" watchObservedRunningTime="2025-06-20 19:12:13.390669121 +0000 UTC m=+27.774681868" Jun 20 19:12:13.551305 systemd-networkd[1367]: lxcc47e4e07ea68: Gained IPv6LL Jun 20 19:12:13.807239 systemd-networkd[1367]: lxc_health: Gained IPv6LL Jun 20 19:12:14.255231 systemd-networkd[1367]: lxcdf613522221d: Gained IPv6LL Jun 20 19:12:16.333808 ntpd[1439]: Listen normally on 8 cilium_host 192.168.0.153:123 Jun 20 19:12:16.333964 ntpd[1439]: Listen normally on 9 cilium_net [fe80::b038:eeff:fe09:f8a2%4]:123 Jun 20 19:12:16.334429 ntpd[1439]: 20 Jun 19:12:16 ntpd[1439]: Listen normally on 8 cilium_host 192.168.0.153:123 Jun 20 19:12:16.334429 ntpd[1439]: 20 Jun 19:12:16 ntpd[1439]: Listen normally on 9 cilium_net [fe80::b038:eeff:fe09:f8a2%4]:123 Jun 20 19:12:16.334429 ntpd[1439]: 20 Jun 19:12:16 ntpd[1439]: Listen normally on 10 cilium_host [fe80::a4f3:9eff:fe11:e8e2%5]:123 Jun 20 19:12:16.334429 ntpd[1439]: 20 Jun 19:12:16 ntpd[1439]: Listen normally on 11 cilium_vxlan [fe80::451:75ff:fe21:a7cf%6]:123 Jun 20 19:12:16.334429 ntpd[1439]: 20 Jun 19:12:16 ntpd[1439]: Listen normally on 12 lxc_health [fe80::805e:b4ff:fe68:b2d2%8]:123 Jun 20 19:12:16.334429 ntpd[1439]: 20 Jun 19:12:16 ntpd[1439]: Listen normally on 13 lxcdf613522221d [fe80::2cef:e8ff:fe0b:108a%10]:123 Jun 20 19:12:16.334429 ntpd[1439]: 20 Jun 19:12:16 ntpd[1439]: Listen normally on 14 lxcc47e4e07ea68 [fe80::f0ca:2bff:fe27:3e73%12]:123 Jun 20 
19:12:16.334048 ntpd[1439]: Listen normally on 10 cilium_host [fe80::a4f3:9eff:fe11:e8e2%5]:123 Jun 20 19:12:16.334114 ntpd[1439]: Listen normally on 11 cilium_vxlan [fe80::451:75ff:fe21:a7cf%6]:123 Jun 20 19:12:16.334179 ntpd[1439]: Listen normally on 12 lxc_health [fe80::805e:b4ff:fe68:b2d2%8]:123 Jun 20 19:12:16.334297 ntpd[1439]: Listen normally on 13 lxcdf613522221d [fe80::2cef:e8ff:fe0b:108a%10]:123 Jun 20 19:12:16.334360 ntpd[1439]: Listen normally on 14 lxcc47e4e07ea68 [fe80::f0ca:2bff:fe27:3e73%12]:123 Jun 20 19:12:17.698655 containerd[1476]: time="2025-06-20T19:12:17.698223190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:12:17.698655 containerd[1476]: time="2025-06-20T19:12:17.698311206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:12:17.698655 containerd[1476]: time="2025-06-20T19:12:17.698338745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:12:17.698655 containerd[1476]: time="2025-06-20T19:12:17.698471681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:12:17.701240 containerd[1476]: time="2025-06-20T19:12:17.700380367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:12:17.701240 containerd[1476]: time="2025-06-20T19:12:17.700446522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:12:17.701240 containerd[1476]: time="2025-06-20T19:12:17.700475455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:12:17.701240 containerd[1476]: time="2025-06-20T19:12:17.700590237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:12:17.777627 systemd[1]: run-containerd-runc-k8s.io-17666ef28b2bde49e11e0dd79e80ae6c28b4d2abca7065519f814f1f13f5c9ab-runc.sBpO4b.mount: Deactivated successfully. Jun 20 19:12:17.790301 systemd[1]: Started cri-containerd-17666ef28b2bde49e11e0dd79e80ae6c28b4d2abca7065519f814f1f13f5c9ab.scope - libcontainer container 17666ef28b2bde49e11e0dd79e80ae6c28b4d2abca7065519f814f1f13f5c9ab. Jun 20 19:12:17.813235 systemd[1]: Started cri-containerd-a43d40ec0fe5c4a545d64cb588926265849d287163903e9536d348c2ca0cc771.scope - libcontainer container a43d40ec0fe5c4a545d64cb588926265849d287163903e9536d348c2ca0cc771. Jun 20 19:12:17.942513 containerd[1476]: time="2025-06-20T19:12:17.942456087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zfl99,Uid:62dd3dd1-fcae-469f-8641-c0bdc2e6a0be,Namespace:kube-system,Attempt:0,} returns sandbox id \"17666ef28b2bde49e11e0dd79e80ae6c28b4d2abca7065519f814f1f13f5c9ab\"" Jun 20 19:12:17.965420 containerd[1476]: time="2025-06-20T19:12:17.965168468Z" level=info msg="CreateContainer within sandbox \"17666ef28b2bde49e11e0dd79e80ae6c28b4d2abca7065519f814f1f13f5c9ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:12:17.994664 containerd[1476]: time="2025-06-20T19:12:17.994406369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wnrpc,Uid:043e54ac-30cd-45fa-8a15-010c55524474,Namespace:kube-system,Attempt:0,} returns sandbox id \"a43d40ec0fe5c4a545d64cb588926265849d287163903e9536d348c2ca0cc771\"" Jun 20 19:12:18.003586 containerd[1476]: time="2025-06-20T19:12:18.003182949Z" level=info msg="CreateContainer within sandbox \"a43d40ec0fe5c4a545d64cb588926265849d287163903e9536d348c2ca0cc771\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:12:18.007882 containerd[1476]: time="2025-06-20T19:12:18.006015711Z" level=info msg="CreateContainer within sandbox \"17666ef28b2bde49e11e0dd79e80ae6c28b4d2abca7065519f814f1f13f5c9ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66d35cd15a10284f24f8b7a7ee0b1ff475e0f91ba2c5994ed0ea864d3a11d674\"" Jun 20 19:12:18.010451 containerd[1476]: time="2025-06-20T19:12:18.010408789Z" level=info msg="StartContainer for \"66d35cd15a10284f24f8b7a7ee0b1ff475e0f91ba2c5994ed0ea864d3a11d674\"" Jun 20 19:12:18.044781 containerd[1476]: time="2025-06-20T19:12:18.044724996Z" level=info msg="CreateContainer within sandbox \"a43d40ec0fe5c4a545d64cb588926265849d287163903e9536d348c2ca0cc771\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ca2e1ced518222021503c9d3a8b71a1b48dbc3a806a13428f26ab14968e95ea\"" Jun 20 19:12:18.047943 containerd[1476]: time="2025-06-20T19:12:18.047799309Z" level=info msg="StartContainer for \"8ca2e1ced518222021503c9d3a8b71a1b48dbc3a806a13428f26ab14968e95ea\"" Jun 20 19:12:18.074232 systemd[1]: Started cri-containerd-66d35cd15a10284f24f8b7a7ee0b1ff475e0f91ba2c5994ed0ea864d3a11d674.scope - libcontainer container 66d35cd15a10284f24f8b7a7ee0b1ff475e0f91ba2c5994ed0ea864d3a11d674. Jun 20 19:12:18.100290 systemd[1]: Started cri-containerd-8ca2e1ced518222021503c9d3a8b71a1b48dbc3a806a13428f26ab14968e95ea.scope - libcontainer container 8ca2e1ced518222021503c9d3a8b71a1b48dbc3a806a13428f26ab14968e95ea. 
Jun 20 19:12:18.143840 containerd[1476]: time="2025-06-20T19:12:18.143683098Z" level=info msg="StartContainer for \"66d35cd15a10284f24f8b7a7ee0b1ff475e0f91ba2c5994ed0ea864d3a11d674\" returns successfully" Jun 20 19:12:18.170239 containerd[1476]: time="2025-06-20T19:12:18.170168230Z" level=info msg="StartContainer for \"8ca2e1ced518222021503c9d3a8b71a1b48dbc3a806a13428f26ab14968e95ea\" returns successfully" Jun 20 19:12:19.071861 kubelet[2667]: I0620 19:12:19.071160 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wnrpc" podStartSLOduration=29.071136426 podStartE2EDuration="29.071136426s" podCreationTimestamp="2025-06-20 19:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:12:19.070346392 +0000 UTC m=+33.454359140" watchObservedRunningTime="2025-06-20 19:12:19.071136426 +0000 UTC m=+33.455149171" Jun 20 19:12:19.123074 kubelet[2667]: I0620 19:12:19.122997 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zfl99" podStartSLOduration=29.122969854 podStartE2EDuration="29.122969854s" podCreationTimestamp="2025-06-20 19:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:12:19.097290992 +0000 UTC m=+33.481303738" watchObservedRunningTime="2025-06-20 19:12:19.122969854 +0000 UTC m=+33.506982594" Jun 20 19:12:35.107495 systemd[1]: Started sshd@9-10.128.0.67:22-147.75.109.163:56530.service - OpenSSH per-connection server daemon (147.75.109.163:56530). 
Jun 20 19:12:35.417203 sshd[4041]: Accepted publickey for core from 147.75.109.163 port 56530 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:12:35.420594 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:12:35.428161 systemd-logind[1458]: New session 10 of user core. Jun 20 19:12:35.432626 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:12:35.753222 sshd[4043]: Connection closed by 147.75.109.163 port 56530 Jun 20 19:12:35.754644 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Jun 20 19:12:35.761065 systemd[1]: sshd@9-10.128.0.67:22-147.75.109.163:56530.service: Deactivated successfully. Jun 20 19:12:35.764334 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:12:35.765689 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:12:35.767280 systemd-logind[1458]: Removed session 10. Jun 20 19:12:40.814363 systemd[1]: Started sshd@10-10.128.0.67:22-147.75.109.163:44338.service - OpenSSH per-connection server daemon (147.75.109.163:44338). Jun 20 19:12:41.119959 sshd[4059]: Accepted publickey for core from 147.75.109.163 port 44338 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:12:41.122017 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:12:41.129645 systemd-logind[1458]: New session 11 of user core. Jun 20 19:12:41.137218 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:12:41.418414 sshd[4061]: Connection closed by 147.75.109.163 port 44338 Jun 20 19:12:41.419753 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Jun 20 19:12:41.425621 systemd[1]: sshd@10-10.128.0.67:22-147.75.109.163:44338.service: Deactivated successfully. Jun 20 19:12:41.428641 systemd[1]: session-11.scope: Deactivated successfully. 
Jun 20 19:12:41.429838 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:12:41.431479 systemd-logind[1458]: Removed session 11. Jun 20 19:12:46.479395 systemd[1]: Started sshd@11-10.128.0.67:22-147.75.109.163:58962.service - OpenSSH per-connection server daemon (147.75.109.163:58962). Jun 20 19:12:46.786329 sshd[4076]: Accepted publickey for core from 147.75.109.163 port 58962 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:12:46.788248 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:12:46.795596 systemd-logind[1458]: New session 12 of user core. Jun 20 19:12:46.799179 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:12:47.084523 sshd[4078]: Connection closed by 147.75.109.163 port 58962 Jun 20 19:12:47.085721 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Jun 20 19:12:47.090219 systemd[1]: sshd@11-10.128.0.67:22-147.75.109.163:58962.service: Deactivated successfully. Jun 20 19:12:47.093284 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:12:47.095691 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:12:47.097462 systemd-logind[1458]: Removed session 12. Jun 20 19:12:52.145395 systemd[1]: Started sshd@12-10.128.0.67:22-147.75.109.163:58976.service - OpenSSH per-connection server daemon (147.75.109.163:58976). Jun 20 19:12:52.436452 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 58976 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:12:52.438526 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:12:52.444680 systemd-logind[1458]: New session 13 of user core. Jun 20 19:12:52.450182 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 20 19:12:52.739983 sshd[4098]: Connection closed by 147.75.109.163 port 58976 Jun 20 19:12:52.741418 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Jun 20 19:12:52.747104 systemd[1]: sshd@12-10.128.0.67:22-147.75.109.163:58976.service: Deactivated successfully. Jun 20 19:12:52.750826 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:12:52.752308 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:12:52.753971 systemd-logind[1458]: Removed session 13. Jun 20 19:12:52.800364 systemd[1]: Started sshd@13-10.128.0.67:22-147.75.109.163:58988.service - OpenSSH per-connection server daemon (147.75.109.163:58988). Jun 20 19:12:53.106988 sshd[4111]: Accepted publickey for core from 147.75.109.163 port 58988 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:12:53.109031 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:12:53.115621 systemd-logind[1458]: New session 14 of user core. Jun 20 19:12:53.122214 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:12:53.451301 sshd[4113]: Connection closed by 147.75.109.163 port 58988 Jun 20 19:12:53.453095 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Jun 20 19:12:53.457777 systemd[1]: sshd@13-10.128.0.67:22-147.75.109.163:58988.service: Deactivated successfully. Jun 20 19:12:53.460990 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:12:53.463584 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:12:53.465319 systemd-logind[1458]: Removed session 14. Jun 20 19:12:53.515379 systemd[1]: Started sshd@14-10.128.0.67:22-147.75.109.163:59002.service - OpenSSH per-connection server daemon (147.75.109.163:59002). 
Jun 20 19:12:53.826475 sshd[4123]: Accepted publickey for core from 147.75.109.163 port 59002 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:12:53.828567 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:12:53.838263 systemd-logind[1458]: New session 15 of user core. Jun 20 19:12:53.845501 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:12:54.126445 sshd[4125]: Connection closed by 147.75.109.163 port 59002 Jun 20 19:12:54.127798 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Jun 20 19:12:54.132536 systemd[1]: sshd@14-10.128.0.67:22-147.75.109.163:59002.service: Deactivated successfully. Jun 20 19:12:54.135585 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:12:54.138465 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:12:54.140457 systemd-logind[1458]: Removed session 15. Jun 20 19:12:59.185438 systemd[1]: Started sshd@15-10.128.0.67:22-147.75.109.163:54652.service - OpenSSH per-connection server daemon (147.75.109.163:54652). Jun 20 19:12:59.479583 sshd[4138]: Accepted publickey for core from 147.75.109.163 port 54652 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:12:59.481238 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:12:59.487575 systemd-logind[1458]: New session 16 of user core. Jun 20 19:12:59.500385 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:12:59.774769 sshd[4140]: Connection closed by 147.75.109.163 port 54652 Jun 20 19:12:59.775827 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Jun 20 19:12:59.781468 systemd[1]: sshd@15-10.128.0.67:22-147.75.109.163:54652.service: Deactivated successfully. Jun 20 19:12:59.785235 systemd[1]: session-16.scope: Deactivated successfully. 
Jun 20 19:12:59.786592 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:12:59.788418 systemd-logind[1458]: Removed session 16. Jun 20 19:13:04.833417 systemd[1]: Started sshd@16-10.128.0.67:22-147.75.109.163:54654.service - OpenSSH per-connection server daemon (147.75.109.163:54654). Jun 20 19:13:05.126352 sshd[4152]: Accepted publickey for core from 147.75.109.163 port 54654 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:05.128317 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:05.135018 systemd-logind[1458]: New session 17 of user core. Jun 20 19:13:05.139172 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:13:05.421438 sshd[4154]: Connection closed by 147.75.109.163 port 54654 Jun 20 19:13:05.422778 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:05.428727 systemd[1]: sshd@16-10.128.0.67:22-147.75.109.163:54654.service: Deactivated successfully. Jun 20 19:13:05.431857 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:13:05.433731 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:13:05.435663 systemd-logind[1458]: Removed session 17. Jun 20 19:13:05.479346 systemd[1]: Started sshd@17-10.128.0.67:22-147.75.109.163:54656.service - OpenSSH per-connection server daemon (147.75.109.163:54656). Jun 20 19:13:05.784078 sshd[4166]: Accepted publickey for core from 147.75.109.163 port 54656 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:05.786687 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:05.793642 systemd-logind[1458]: New session 18 of user core. Jun 20 19:13:05.797126 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 20 19:13:06.183292 sshd[4168]: Connection closed by 147.75.109.163 port 54656 Jun 20 19:13:06.184802 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:06.190402 systemd[1]: sshd@17-10.128.0.67:22-147.75.109.163:54656.service: Deactivated successfully. Jun 20 19:13:06.193266 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:13:06.194530 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:13:06.196241 systemd-logind[1458]: Removed session 18. Jun 20 19:13:06.241390 systemd[1]: Started sshd@18-10.128.0.67:22-147.75.109.163:42372.service - OpenSSH per-connection server daemon (147.75.109.163:42372). Jun 20 19:13:06.542277 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 42372 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:06.544147 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:06.550352 systemd-logind[1458]: New session 19 of user core. Jun 20 19:13:06.561237 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:13:07.463871 sshd[4179]: Connection closed by 147.75.109.163 port 42372 Jun 20 19:13:07.464883 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:07.470693 systemd[1]: sshd@18-10.128.0.67:22-147.75.109.163:42372.service: Deactivated successfully. Jun 20 19:13:07.473725 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:13:07.475126 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:13:07.476621 systemd-logind[1458]: Removed session 19. Jun 20 19:13:07.526410 systemd[1]: Started sshd@19-10.128.0.67:22-147.75.109.163:42376.service - OpenSSH per-connection server daemon (147.75.109.163:42376). 
Jun 20 19:13:07.823244 sshd[4197]: Accepted publickey for core from 147.75.109.163 port 42376 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:07.825502 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:07.840256 systemd-logind[1458]: New session 20 of user core. Jun 20 19:13:07.845201 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:13:08.257798 sshd[4199]: Connection closed by 147.75.109.163 port 42376 Jun 20 19:13:08.259138 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:08.263535 systemd[1]: sshd@19-10.128.0.67:22-147.75.109.163:42376.service: Deactivated successfully. Jun 20 19:13:08.266843 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:13:08.269255 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:13:08.271134 systemd-logind[1458]: Removed session 20. Jun 20 19:13:08.317393 systemd[1]: Started sshd@20-10.128.0.67:22-147.75.109.163:42388.service - OpenSSH per-connection server daemon (147.75.109.163:42388). Jun 20 19:13:08.622653 sshd[4209]: Accepted publickey for core from 147.75.109.163 port 42388 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:08.624553 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:08.631051 systemd-logind[1458]: New session 21 of user core. Jun 20 19:13:08.638237 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:13:08.920246 sshd[4211]: Connection closed by 147.75.109.163 port 42388 Jun 20 19:13:08.921521 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:08.930073 systemd[1]: sshd@20-10.128.0.67:22-147.75.109.163:42388.service: Deactivated successfully. Jun 20 19:13:08.933790 systemd[1]: session-21.scope: Deactivated successfully. 
Jun 20 19:13:08.934958 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:13:08.936605 systemd-logind[1458]: Removed session 21. Jun 20 19:13:13.977388 systemd[1]: Started sshd@21-10.128.0.67:22-147.75.109.163:42400.service - OpenSSH per-connection server daemon (147.75.109.163:42400). Jun 20 19:13:14.280222 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 42400 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:14.282176 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:14.289486 systemd-logind[1458]: New session 22 of user core. Jun 20 19:13:14.297186 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:13:14.572593 sshd[4227]: Connection closed by 147.75.109.163 port 42400 Jun 20 19:13:14.573987 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:14.584361 systemd[1]: sshd@21-10.128.0.67:22-147.75.109.163:42400.service: Deactivated successfully. Jun 20 19:13:14.588059 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:13:14.589336 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:13:14.591223 systemd-logind[1458]: Removed session 22. Jun 20 19:13:19.631385 systemd[1]: Started sshd@22-10.128.0.67:22-147.75.109.163:40532.service - OpenSSH per-connection server daemon (147.75.109.163:40532). Jun 20 19:13:19.936156 sshd[4239]: Accepted publickey for core from 147.75.109.163 port 40532 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:19.938037 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:19.945173 systemd-logind[1458]: New session 23 of user core. Jun 20 19:13:19.953330 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 20 19:13:20.228372 sshd[4241]: Connection closed by 147.75.109.163 port 40532 Jun 20 19:13:20.229863 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:20.234848 systemd[1]: sshd@22-10.128.0.67:22-147.75.109.163:40532.service: Deactivated successfully. Jun 20 19:13:20.238353 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:13:20.240984 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:13:20.242721 systemd-logind[1458]: Removed session 23. Jun 20 19:13:25.284345 systemd[1]: Started sshd@23-10.128.0.67:22-147.75.109.163:40548.service - OpenSSH per-connection server daemon (147.75.109.163:40548). Jun 20 19:13:25.591815 sshd[4255]: Accepted publickey for core from 147.75.109.163 port 40548 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:25.593765 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:25.600004 systemd-logind[1458]: New session 24 of user core. Jun 20 19:13:25.607231 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:13:25.891616 sshd[4257]: Connection closed by 147.75.109.163 port 40548 Jun 20 19:13:25.893375 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:25.898067 systemd[1]: sshd@23-10.128.0.67:22-147.75.109.163:40548.service: Deactivated successfully. Jun 20 19:13:25.901821 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:13:25.904602 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:13:25.906545 systemd-logind[1458]: Removed session 24. Jun 20 19:13:25.951406 systemd[1]: Started sshd@24-10.128.0.67:22-147.75.109.163:39496.service - OpenSSH per-connection server daemon (147.75.109.163:39496). 
Jun 20 19:13:26.244760 sshd[4269]: Accepted publickey for core from 147.75.109.163 port 39496 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:26.246850 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:26.253110 systemd-logind[1458]: New session 25 of user core. Jun 20 19:13:26.259158 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:13:27.865540 containerd[1476]: time="2025-06-20T19:13:27.864160827Z" level=info msg="StopContainer for \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\" with timeout 30 (s)" Jun 20 19:13:27.868636 containerd[1476]: time="2025-06-20T19:13:27.867292903Z" level=info msg="Stop container \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\" with signal terminated" Jun 20 19:13:27.897752 systemd[1]: cri-containerd-04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf.scope: Deactivated successfully. Jun 20 19:13:27.910570 containerd[1476]: time="2025-06-20T19:13:27.909745057Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:13:27.930399 containerd[1476]: time="2025-06-20T19:13:27.930342195Z" level=info msg="StopContainer for \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\" with timeout 2 (s)" Jun 20 19:13:27.930979 containerd[1476]: time="2025-06-20T19:13:27.930797326Z" level=info msg="Stop container \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\" with signal terminated" Jun 20 19:13:27.947219 systemd-networkd[1367]: lxc_health: Link DOWN Jun 20 19:13:27.947235 systemd-networkd[1367]: lxc_health: Lost carrier Jun 20 19:13:27.962038 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf-rootfs.mount: Deactivated successfully. Jun 20 19:13:27.975615 systemd[1]: cri-containerd-b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c.scope: Deactivated successfully. Jun 20 19:13:27.976698 systemd[1]: cri-containerd-b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c.scope: Consumed 9.885s CPU time, 122.9M memory peak, 136K read from disk, 13.3M written to disk. Jun 20 19:13:27.989262 containerd[1476]: time="2025-06-20T19:13:27.988910610Z" level=info msg="shim disconnected" id=04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf namespace=k8s.io Jun 20 19:13:27.989553 containerd[1476]: time="2025-06-20T19:13:27.989206813Z" level=warning msg="cleaning up after shim disconnected" id=04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf namespace=k8s.io Jun 20 19:13:27.989553 containerd[1476]: time="2025-06-20T19:13:27.989335822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:28.028793 containerd[1476]: time="2025-06-20T19:13:28.027443205Z" level=info msg="StopContainer for \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\" returns successfully" Jun 20 19:13:28.027871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c-rootfs.mount: Deactivated successfully. 
Jun 20 19:13:28.032889 containerd[1476]: time="2025-06-20T19:13:28.029480318Z" level=info msg="StopPodSandbox for \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\"" Jun 20 19:13:28.032889 containerd[1476]: time="2025-06-20T19:13:28.029538067Z" level=info msg="Container to stop \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:13:28.035713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f-shm.mount: Deactivated successfully. Jun 20 19:13:28.036094 containerd[1476]: time="2025-06-20T19:13:28.035947406Z" level=info msg="shim disconnected" id=b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c namespace=k8s.io Jun 20 19:13:28.036094 containerd[1476]: time="2025-06-20T19:13:28.036013841Z" level=warning msg="cleaning up after shim disconnected" id=b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c namespace=k8s.io Jun 20 19:13:28.036094 containerd[1476]: time="2025-06-20T19:13:28.036027856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:28.051687 systemd[1]: cri-containerd-5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f.scope: Deactivated successfully. 
Jun 20 19:13:28.073113 containerd[1476]: time="2025-06-20T19:13:28.072893631Z" level=info msg="StopContainer for \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\" returns successfully" Jun 20 19:13:28.073778 containerd[1476]: time="2025-06-20T19:13:28.073730435Z" level=info msg="StopPodSandbox for \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\"" Jun 20 19:13:28.073895 containerd[1476]: time="2025-06-20T19:13:28.073798509Z" level=info msg="Container to stop \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:13:28.073895 containerd[1476]: time="2025-06-20T19:13:28.073870614Z" level=info msg="Container to stop \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:13:28.073895 containerd[1476]: time="2025-06-20T19:13:28.073886425Z" level=info msg="Container to stop \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:13:28.074346 containerd[1476]: time="2025-06-20T19:13:28.073902028Z" level=info msg="Container to stop \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:13:28.074346 containerd[1476]: time="2025-06-20T19:13:28.073932256Z" level=info msg="Container to stop \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:13:28.080027 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19-shm.mount: Deactivated successfully. Jun 20 19:13:28.091057 systemd[1]: cri-containerd-06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19.scope: Deactivated successfully. 
Jun 20 19:13:28.111552 containerd[1476]: time="2025-06-20T19:13:28.111468268Z" level=info msg="shim disconnected" id=5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f namespace=k8s.io Jun 20 19:13:28.112113 containerd[1476]: time="2025-06-20T19:13:28.111843788Z" level=warning msg="cleaning up after shim disconnected" id=5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f namespace=k8s.io Jun 20 19:13:28.112113 containerd[1476]: time="2025-06-20T19:13:28.111871829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:28.144070 containerd[1476]: time="2025-06-20T19:13:28.143891494Z" level=info msg="TearDown network for sandbox \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" successfully" Jun 20 19:13:28.144751 containerd[1476]: time="2025-06-20T19:13:28.144336904Z" level=info msg="StopPodSandbox for \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" returns successfully" Jun 20 19:13:28.147993 containerd[1476]: time="2025-06-20T19:13:28.147105731Z" level=info msg="shim disconnected" id=06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19 namespace=k8s.io Jun 20 19:13:28.147993 containerd[1476]: time="2025-06-20T19:13:28.147509478Z" level=warning msg="cleaning up after shim disconnected" id=06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19 namespace=k8s.io Jun 20 19:13:28.147993 containerd[1476]: time="2025-06-20T19:13:28.147527040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:28.174162 containerd[1476]: time="2025-06-20T19:13:28.174092016Z" level=info msg="TearDown network for sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" successfully" Jun 20 19:13:28.174162 containerd[1476]: time="2025-06-20T19:13:28.174143228Z" level=info msg="StopPodSandbox for \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" returns successfully" Jun 20 19:13:28.228608 kubelet[2667]: I0620 19:13:28.228547 2667 
scope.go:117] "RemoveContainer" containerID="b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c" Jun 20 19:13:28.233701 containerd[1476]: time="2025-06-20T19:13:28.233149265Z" level=info msg="RemoveContainer for \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\"" Jun 20 19:13:28.241066 containerd[1476]: time="2025-06-20T19:13:28.241011739Z" level=info msg="RemoveContainer for \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\" returns successfully" Jun 20 19:13:28.241394 kubelet[2667]: I0620 19:13:28.241359 2667 scope.go:117] "RemoveContainer" containerID="d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e" Jun 20 19:13:28.242789 containerd[1476]: time="2025-06-20T19:13:28.242753572Z" level=info msg="RemoveContainer for \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\"" Jun 20 19:13:28.247689 containerd[1476]: time="2025-06-20T19:13:28.247624042Z" level=info msg="RemoveContainer for \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\" returns successfully" Jun 20 19:13:28.248085 kubelet[2667]: I0620 19:13:28.247910 2667 scope.go:117] "RemoveContainer" containerID="a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618" Jun 20 19:13:28.249383 containerd[1476]: time="2025-06-20T19:13:28.249343831Z" level=info msg="RemoveContainer for \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\"" Jun 20 19:13:28.253578 containerd[1476]: time="2025-06-20T19:13:28.253518065Z" level=info msg="RemoveContainer for \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\" returns successfully" Jun 20 19:13:28.253863 kubelet[2667]: I0620 19:13:28.253761 2667 scope.go:117] "RemoveContainer" containerID="a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef" Jun 20 19:13:28.255183 containerd[1476]: time="2025-06-20T19:13:28.255138246Z" level=info msg="RemoveContainer for \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\"" 
Jun 20 19:13:28.257785 kubelet[2667]: I0620 19:13:28.257599 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsjvm\" (UniqueName: \"kubernetes.io/projected/9fb0f7b5-e282-4417-8aff-e06a491718c8-kube-api-access-dsjvm\") pod \"9fb0f7b5-e282-4417-8aff-e06a491718c8\" (UID: \"9fb0f7b5-e282-4417-8aff-e06a491718c8\") " Jun 20 19:13:28.257785 kubelet[2667]: I0620 19:13:28.257662 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb0f7b5-e282-4417-8aff-e06a491718c8-cilium-config-path\") pod \"9fb0f7b5-e282-4417-8aff-e06a491718c8\" (UID: \"9fb0f7b5-e282-4417-8aff-e06a491718c8\") " Jun 20 19:13:28.260583 containerd[1476]: time="2025-06-20T19:13:28.260529391Z" level=info msg="RemoveContainer for \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\" returns successfully" Jun 20 19:13:28.261410 kubelet[2667]: I0620 19:13:28.261021 2667 scope.go:117] "RemoveContainer" containerID="b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017" Jun 20 19:13:28.263202 containerd[1476]: time="2025-06-20T19:13:28.262796581Z" level=info msg="RemoveContainer for \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\"" Jun 20 19:13:28.264275 kubelet[2667]: I0620 19:13:28.264209 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb0f7b5-e282-4417-8aff-e06a491718c8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9fb0f7b5-e282-4417-8aff-e06a491718c8" (UID: "9fb0f7b5-e282-4417-8aff-e06a491718c8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:13:28.265672 kubelet[2667]: I0620 19:13:28.265634 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb0f7b5-e282-4417-8aff-e06a491718c8-kube-api-access-dsjvm" (OuterVolumeSpecName: "kube-api-access-dsjvm") pod "9fb0f7b5-e282-4417-8aff-e06a491718c8" (UID: "9fb0f7b5-e282-4417-8aff-e06a491718c8"). InnerVolumeSpecName "kube-api-access-dsjvm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:13:28.267199 containerd[1476]: time="2025-06-20T19:13:28.267148472Z" level=info msg="RemoveContainer for \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\" returns successfully" Jun 20 19:13:28.267464 kubelet[2667]: I0620 19:13:28.267438 2667 scope.go:117] "RemoveContainer" containerID="b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c" Jun 20 19:13:28.267879 containerd[1476]: time="2025-06-20T19:13:28.267830418Z" level=error msg="ContainerStatus for \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\": not found" Jun 20 19:13:28.268100 kubelet[2667]: E0620 19:13:28.268065 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\": not found" containerID="b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c" Jun 20 19:13:28.268232 kubelet[2667]: I0620 19:13:28.268114 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c"} err="failed to get container status \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"b6e383274d2afac2bccf9c22882f346420bbef258bad2c2c2fcdfc56f40b265c\": not found" Jun 20 19:13:28.268306 kubelet[2667]: I0620 19:13:28.268239 2667 scope.go:117] "RemoveContainer" containerID="d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e" Jun 20 19:13:28.268599 containerd[1476]: time="2025-06-20T19:13:28.268493646Z" level=error msg="ContainerStatus for \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\": not found" Jun 20 19:13:28.268692 kubelet[2667]: E0620 19:13:28.268624 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\": not found" containerID="d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e" Jun 20 19:13:28.268692 kubelet[2667]: I0620 19:13:28.268657 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e"} err="failed to get container status \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4a64636cf1d0b26a77fee2522c635b900902b80051570965b3806e330de684e\": not found" Jun 20 19:13:28.268692 kubelet[2667]: I0620 19:13:28.268684 2667 scope.go:117] "RemoveContainer" containerID="a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618" Jun 20 19:13:28.269052 containerd[1476]: time="2025-06-20T19:13:28.268896620Z" level=error msg="ContainerStatus for \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\": not found" Jun 20 19:13:28.269265 kubelet[2667]: E0620 19:13:28.269225 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\": not found" containerID="a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618" Jun 20 19:13:28.269359 kubelet[2667]: I0620 19:13:28.269278 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618"} err="failed to get container status \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6cef61a22b7e6f27148fef9f1a8d2a1e96b8b35804f869ff46122e5a87f1618\": not found" Jun 20 19:13:28.269359 kubelet[2667]: I0620 19:13:28.269309 2667 scope.go:117] "RemoveContainer" containerID="a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef" Jun 20 19:13:28.269713 containerd[1476]: time="2025-06-20T19:13:28.269604336Z" level=error msg="ContainerStatus for \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\": not found" Jun 20 19:13:28.269840 kubelet[2667]: E0620 19:13:28.269813 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\": not found" containerID="a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef" Jun 20 19:13:28.269969 kubelet[2667]: I0620 19:13:28.269877 2667 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef"} err="failed to get container status \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2aadacdab4e0334cfb5d08429d1366e1fb27d879fd7fc0c56dbc4f88ea9deef\": not found" Jun 20 19:13:28.269969 kubelet[2667]: I0620 19:13:28.269906 2667 scope.go:117] "RemoveContainer" containerID="b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017" Jun 20 19:13:28.270451 containerd[1476]: time="2025-06-20T19:13:28.270407826Z" level=error msg="ContainerStatus for \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\": not found" Jun 20 19:13:28.270640 kubelet[2667]: E0620 19:13:28.270602 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\": not found" containerID="b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017" Jun 20 19:13:28.270640 kubelet[2667]: I0620 19:13:28.270645 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017"} err="failed to get container status \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\": rpc error: code = NotFound desc = an error occurred when try to find container \"b790b072c53cab10ef984bd2ad61ac55b2309a58de9c3ad099cea8786d4a3017\": not found" Jun 20 19:13:28.270640 kubelet[2667]: I0620 19:13:28.270670 2667 scope.go:117] "RemoveContainer" containerID="04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf" Jun 20 19:13:28.272381 containerd[1476]: 
time="2025-06-20T19:13:28.272347512Z" level=info msg="RemoveContainer for \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\"" Jun 20 19:13:28.277663 containerd[1476]: time="2025-06-20T19:13:28.277616674Z" level=info msg="RemoveContainer for \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\" returns successfully" Jun 20 19:13:28.278026 kubelet[2667]: I0620 19:13:28.277995 2667 scope.go:117] "RemoveContainer" containerID="04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf" Jun 20 19:13:28.278377 containerd[1476]: time="2025-06-20T19:13:28.278330100Z" level=error msg="ContainerStatus for \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\": not found" Jun 20 19:13:28.278588 kubelet[2667]: E0620 19:13:28.278541 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\": not found" containerID="04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf" Jun 20 19:13:28.278683 kubelet[2667]: I0620 19:13:28.278586 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf"} err="failed to get container status \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"04684c8627775f01d707ce9ec7fe3ae68aa1421fc9065298525d403d3da749cf\": not found" Jun 20 19:13:28.358644 kubelet[2667]: I0620 19:13:28.358562 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cni-path\") pod 
\"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.358644 kubelet[2667]: I0620 19:13:28.358630 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-bpf-maps\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.358960 kubelet[2667]: I0620 19:13:28.358666 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-net\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.358960 kubelet[2667]: I0620 19:13:28.358691 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-xtables-lock\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.358960 kubelet[2667]: I0620 19:13:28.358750 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47471ebf-3d99-4372-9b55-4baeba3f8df7-clustermesh-secrets\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.358960 kubelet[2667]: I0620 19:13:28.358774 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-kernel\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.358960 kubelet[2667]: I0620 19:13:28.358803 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-config-path\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.358960 kubelet[2667]: I0620 19:13:28.358833 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-hostproc\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.359265 kubelet[2667]: I0620 19:13:28.358857 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-lib-modules\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.359265 kubelet[2667]: I0620 19:13:28.358882 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-etc-cni-netd\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.359265 kubelet[2667]: I0620 19:13:28.358907 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-run\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.359265 kubelet[2667]: I0620 19:13:28.359053 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-cgroup\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.359265 kubelet[2667]: I0620 19:13:28.359086 
2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-hubble-tls\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.359265 kubelet[2667]: I0620 19:13:28.359118 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w74h\" (UniqueName: \"kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-kube-api-access-4w74h\") pod \"47471ebf-3d99-4372-9b55-4baeba3f8df7\" (UID: \"47471ebf-3d99-4372-9b55-4baeba3f8df7\") " Jun 20 19:13:28.359553 kubelet[2667]: I0620 19:13:28.359190 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsjvm\" (UniqueName: \"kubernetes.io/projected/9fb0f7b5-e282-4417-8aff-e06a491718c8-kube-api-access-dsjvm\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.359553 kubelet[2667]: I0620 19:13:28.359210 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb0f7b5-e282-4417-8aff-e06a491718c8-cilium-config-path\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.363288 kubelet[2667]: I0620 19:13:28.363238 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:13:28.363512 kubelet[2667]: I0620 19:13:28.363487 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cni-path" (OuterVolumeSpecName: "cni-path") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.363657 kubelet[2667]: I0620 19:13:28.363634 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.363753 kubelet[2667]: I0620 19:13:28.363730 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.363817 kubelet[2667]: I0620 19:13:28.363666 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-kube-api-access-4w74h" (OuterVolumeSpecName: "kube-api-access-4w74h") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "kube-api-access-4w74h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:13:28.363817 kubelet[2667]: I0620 19:13:28.363697 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-hostproc" (OuterVolumeSpecName: "hostproc") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.363817 kubelet[2667]: I0620 19:13:28.363786 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.363817 kubelet[2667]: I0620 19:13:28.363813 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.364103 kubelet[2667]: I0620 19:13:28.363842 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.364215 kubelet[2667]: I0620 19:13:28.364184 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.364383 kubelet[2667]: I0620 19:13:28.364361 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.368557 kubelet[2667]: I0620 19:13:28.368501 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:13:28.368695 kubelet[2667]: I0620 19:13:28.368638 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:13:28.369121 kubelet[2667]: I0620 19:13:28.369075 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47471ebf-3d99-4372-9b55-4baeba3f8df7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "47471ebf-3d99-4372-9b55-4baeba3f8df7" (UID: "47471ebf-3d99-4372-9b55-4baeba3f8df7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:13:28.459786 kubelet[2667]: I0620 19:13:28.459616 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-kernel\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.459786 kubelet[2667]: I0620 19:13:28.459669 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-config-path\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.459786 kubelet[2667]: I0620 19:13:28.459689 2667 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-hostproc\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.459786 kubelet[2667]: I0620 19:13:28.459705 2667 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-lib-modules\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.459786 kubelet[2667]: I0620 19:13:28.459741 2667 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-etc-cni-netd\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.459786 kubelet[2667]: I0620 19:13:28.459760 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-run\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460243 kubelet[2667]: I0620 19:13:28.459794 2667 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-hubble-tls\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460243 kubelet[2667]: I0620 19:13:28.459812 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4w74h\" (UniqueName: \"kubernetes.io/projected/47471ebf-3d99-4372-9b55-4baeba3f8df7-kube-api-access-4w74h\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460243 kubelet[2667]: I0620 19:13:28.459828 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cilium-cgroup\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460243 kubelet[2667]: I0620 19:13:28.459842 2667 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-bpf-maps\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460243 kubelet[2667]: I0620 19:13:28.459863 2667 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-cni-path\") on node 
\"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460243 kubelet[2667]: I0620 19:13:28.459876 2667 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-xtables-lock\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460243 kubelet[2667]: I0620 19:13:28.459891 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47471ebf-3d99-4372-9b55-4baeba3f8df7-host-proc-sys-net\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.460910 kubelet[2667]: I0620 19:13:28.459907 2667 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47471ebf-3d99-4372-9b55-4baeba3f8df7-clustermesh-secrets\") on node \"ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" DevicePath \"\"" Jun 20 19:13:28.536313 systemd[1]: Removed slice kubepods-burstable-pod47471ebf_3d99_4372_9b55_4baeba3f8df7.slice - libcontainer container kubepods-burstable-pod47471ebf_3d99_4372_9b55_4baeba3f8df7.slice. Jun 20 19:13:28.536864 systemd[1]: kubepods-burstable-pod47471ebf_3d99_4372_9b55_4baeba3f8df7.slice: Consumed 10.011s CPU time, 123.3M memory peak, 136K read from disk, 13.3M written to disk. Jun 20 19:13:28.541727 systemd[1]: Removed slice kubepods-besteffort-pod9fb0f7b5_e282_4417_8aff_e06a491718c8.slice - libcontainer container kubepods-besteffort-pod9fb0f7b5_e282_4417_8aff_e06a491718c8.slice. Jun 20 19:13:28.886524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19-rootfs.mount: Deactivated successfully. 
Jun 20 19:13:28.886716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f-rootfs.mount: Deactivated successfully. Jun 20 19:13:28.886833 systemd[1]: var-lib-kubelet-pods-47471ebf\x2d3d99\x2d4372\x2d9b55\x2d4baeba3f8df7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:13:28.886972 systemd[1]: var-lib-kubelet-pods-47471ebf\x2d3d99\x2d4372\x2d9b55\x2d4baeba3f8df7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:13:28.887092 systemd[1]: var-lib-kubelet-pods-9fb0f7b5\x2de282\x2d4417\x2d8aff\x2de06a491718c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddsjvm.mount: Deactivated successfully. Jun 20 19:13:28.887213 systemd[1]: var-lib-kubelet-pods-47471ebf\x2d3d99\x2d4372\x2d9b55\x2d4baeba3f8df7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4w74h.mount: Deactivated successfully. Jun 20 19:13:29.832831 kubelet[2667]: I0620 19:13:29.832768 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47471ebf-3d99-4372-9b55-4baeba3f8df7" path="/var/lib/kubelet/pods/47471ebf-3d99-4372-9b55-4baeba3f8df7/volumes" Jun 20 19:13:29.834182 kubelet[2667]: I0620 19:13:29.833902 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb0f7b5-e282-4417-8aff-e06a491718c8" path="/var/lib/kubelet/pods/9fb0f7b5-e282-4417-8aff-e06a491718c8/volumes" Jun 20 19:13:29.840959 sshd[4271]: Connection closed by 147.75.109.163 port 39496 Jun 20 19:13:29.839683 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:29.849406 systemd[1]: sshd@24-10.128.0.67:22-147.75.109.163:39496.service: Deactivated successfully. Jun 20 19:13:29.853692 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:13:29.855725 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit. 
Jun 20 19:13:29.861453 systemd-logind[1458]: Removed session 25. Jun 20 19:13:29.899338 systemd[1]: Started sshd@25-10.128.0.67:22-147.75.109.163:39500.service - OpenSSH per-connection server daemon (147.75.109.163:39500). Jun 20 19:13:30.207285 sshd[4432]: Accepted publickey for core from 147.75.109.163 port 39500 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:30.208207 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:30.215586 systemd-logind[1458]: New session 26 of user core. Jun 20 19:13:30.220167 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 19:13:30.333689 ntpd[1439]: Deleting interface #12 lxc_health, fe80::805e:b4ff:fe68:b2d2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Jun 20 19:13:30.334206 ntpd[1439]: 20 Jun 19:13:30 ntpd[1439]: Deleting interface #12 lxc_health, fe80::805e:b4ff:fe68:b2d2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Jun 20 19:13:31.015696 kubelet[2667]: E0620 19:13:31.015588 2667 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:13:31.456619 kubelet[2667]: I0620 19:13:31.454378 2667 memory_manager.go:355] "RemoveStaleState removing state" podUID="9fb0f7b5-e282-4417-8aff-e06a491718c8" containerName="cilium-operator" Jun 20 19:13:31.456619 kubelet[2667]: I0620 19:13:31.456043 2667 memory_manager.go:355] "RemoveStaleState removing state" podUID="47471ebf-3d99-4372-9b55-4baeba3f8df7" containerName="cilium-agent" Jun 20 19:13:31.470484 sshd[4434]: Connection closed by 147.75.109.163 port 39500 Jun 20 19:13:31.472174 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:31.476749 systemd[1]: Created slice kubepods-burstable-podc1d270b0_a984_49ec_b64e_5ad988b111f9.slice - libcontainer container 
kubepods-burstable-podc1d270b0_a984_49ec_b64e_5ad988b111f9.slice. Jun 20 19:13:31.483465 systemd[1]: sshd@25-10.128.0.67:22-147.75.109.163:39500.service: Deactivated successfully. Jun 20 19:13:31.485320 kubelet[2667]: I0620 19:13:31.485276 2667 status_manager.go:890] "Failed to get status for pod" podUID="c1d270b0-a984-49ec-b64e-5ad988b111f9" pod="kube-system/cilium-hlxbb" err="pods \"cilium-hlxbb\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" Jun 20 19:13:31.485900 kubelet[2667]: W0620 19:13:31.485871 2667 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object Jun 20 19:13:31.486164 kubelet[2667]: E0620 19:13:31.486099 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError" Jun 20 19:13:31.486899 kubelet[2667]: W0620 19:13:31.486775 2667 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list 
resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object Jun 20 19:13:31.486899 kubelet[2667]: E0620 19:13:31.486816 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError" Jun 20 19:13:31.489986 kubelet[2667]: W0620 19:13:31.488440 2667 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object Jun 20 19:13:31.490278 kubelet[2667]: E0620 19:13:31.490233 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError" Jun 20 19:13:31.490406 kubelet[2667]: W0620 19:13:31.490305 2667 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User 
"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object Jun 20 19:13:31.490406 kubelet[2667]: E0620 19:13:31.490341 2667 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal' and this object" logger="UnhandledError" Jun 20 19:13:31.491777 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 19:13:31.493012 systemd[1]: session-26.scope: Consumed 1.024s CPU time, 26M memory peak. Jun 20 19:13:31.495360 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit. Jun 20 19:13:31.499945 systemd-logind[1458]: Removed session 26. Jun 20 19:13:31.539081 systemd[1]: Started sshd@26-10.128.0.67:22-147.75.109.163:39510.service - OpenSSH per-connection server daemon (147.75.109.163:39510). 
Jun 20 19:13:31.580120 kubelet[2667]: I0620 19:13:31.579888 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-cni-path\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580120 kubelet[2667]: I0620 19:13:31.579973 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-etc-cni-netd\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580120 kubelet[2667]: I0620 19:13:31.580006 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-bpf-maps\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580120 kubelet[2667]: I0620 19:13:31.580035 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-lib-modules\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580120 kubelet[2667]: I0620 19:13:31.580064 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-host-proc-sys-net\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580120 kubelet[2667]: I0620 19:13:31.580090 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-xtables-lock\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580562 kubelet[2667]: I0620 19:13:31.580118 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1d270b0-a984-49ec-b64e-5ad988b111f9-cilium-config-path\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580562 kubelet[2667]: I0620 19:13:31.580148 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pwq\" (UniqueName: \"kubernetes.io/projected/c1d270b0-a984-49ec-b64e-5ad988b111f9-kube-api-access-l4pwq\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580562 kubelet[2667]: I0620 19:13:31.580182 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-host-proc-sys-kernel\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580562 kubelet[2667]: I0620 19:13:31.580205 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1d270b0-a984-49ec-b64e-5ad988b111f9-hubble-tls\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580562 kubelet[2667]: I0620 19:13:31.580229 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/c1d270b0-a984-49ec-b64e-5ad988b111f9-clustermesh-secrets\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580843 kubelet[2667]: I0620 19:13:31.580258 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-cilium-run\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580843 kubelet[2667]: I0620 19:13:31.580281 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-cilium-cgroup\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580843 kubelet[2667]: I0620 19:13:31.580306 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1d270b0-a984-49ec-b64e-5ad988b111f9-cilium-ipsec-secrets\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.580843 kubelet[2667]: I0620 19:13:31.580341 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1d270b0-a984-49ec-b64e-5ad988b111f9-hostproc\") pod \"cilium-hlxbb\" (UID: \"c1d270b0-a984-49ec-b64e-5ad988b111f9\") " pod="kube-system/cilium-hlxbb" Jun 20 19:13:31.849083 sshd[4445]: Accepted publickey for core from 147.75.109.163 port 39510 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:31.851023 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:31.859025 systemd-logind[1458]: New 
session 27 of user core. Jun 20 19:13:31.864491 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 20 19:13:32.058485 sshd[4448]: Connection closed by 147.75.109.163 port 39510 Jun 20 19:13:32.059470 sshd-session[4445]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:32.066233 systemd[1]: sshd@26-10.128.0.67:22-147.75.109.163:39510.service: Deactivated successfully. Jun 20 19:13:32.069711 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 19:13:32.071274 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit. Jun 20 19:13:32.073370 systemd-logind[1458]: Removed session 27. Jun 20 19:13:32.118617 systemd[1]: Started sshd@27-10.128.0.67:22-147.75.109.163:39516.service - OpenSSH per-connection server daemon (147.75.109.163:39516). Jun 20 19:13:32.420075 sshd[4455]: Accepted publickey for core from 147.75.109.163 port 39516 ssh2: RSA SHA256:Z4N9j26NeFB/dOvfDCiL/OdHq6n9pj3QIZXGG6vuNCI Jun 20 19:13:32.422050 sshd-session[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:13:32.429074 systemd-logind[1458]: New session 28 of user core. Jun 20 19:13:32.438208 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 20 19:13:32.682770 kubelet[2667]: E0620 19:13:32.682199 2667 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jun 20 19:13:32.682770 kubelet[2667]: E0620 19:13:32.682335 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1d270b0-a984-49ec-b64e-5ad988b111f9-clustermesh-secrets podName:c1d270b0-a984-49ec-b64e-5ad988b111f9 nodeName:}" failed. No retries permitted until 2025-06-20 19:13:33.182306148 +0000 UTC m=+107.566318886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/c1d270b0-a984-49ec-b64e-5ad988b111f9-clustermesh-secrets") pod "cilium-hlxbb" (UID: "c1d270b0-a984-49ec-b64e-5ad988b111f9") : failed to sync secret cache: timed out waiting for the condition Jun 20 19:13:32.685329 kubelet[2667]: E0620 19:13:32.685058 2667 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jun 20 19:13:32.685329 kubelet[2667]: E0620 19:13:32.685100 2667 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-hlxbb: failed to sync secret cache: timed out waiting for the condition Jun 20 19:13:32.685329 kubelet[2667]: E0620 19:13:32.685188 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1d270b0-a984-49ec-b64e-5ad988b111f9-hubble-tls podName:c1d270b0-a984-49ec-b64e-5ad988b111f9 nodeName:}" failed. No retries permitted until 2025-06-20 19:13:33.185160184 +0000 UTC m=+107.569172920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/c1d270b0-a984-49ec-b64e-5ad988b111f9-hubble-tls") pod "cilium-hlxbb" (UID: "c1d270b0-a984-49ec-b64e-5ad988b111f9") : failed to sync secret cache: timed out waiting for the condition Jun 20 19:13:33.285373 containerd[1476]: time="2025-06-20T19:13:33.285146163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hlxbb,Uid:c1d270b0-a984-49ec-b64e-5ad988b111f9,Namespace:kube-system,Attempt:0,}" Jun 20 19:13:33.322968 containerd[1476]: time="2025-06-20T19:13:33.322773345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:13:33.323291 containerd[1476]: time="2025-06-20T19:13:33.323000022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:13:33.323291 containerd[1476]: time="2025-06-20T19:13:33.323040282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:13:33.324189 containerd[1476]: time="2025-06-20T19:13:33.323215940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:13:33.356213 systemd[1]: Started cri-containerd-6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927.scope - libcontainer container 6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927. Jun 20 19:13:33.392902 containerd[1476]: time="2025-06-20T19:13:33.392763661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hlxbb,Uid:c1d270b0-a984-49ec-b64e-5ad988b111f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\"" Jun 20 19:13:33.398444 containerd[1476]: time="2025-06-20T19:13:33.398364447Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:13:33.419529 containerd[1476]: time="2025-06-20T19:13:33.419470217Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d\"" Jun 20 19:13:33.420381 containerd[1476]: time="2025-06-20T19:13:33.420339170Z" level=info msg="StartContainer for \"a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d\"" Jun 20 19:13:33.458281 systemd[1]: Started cri-containerd-a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d.scope - libcontainer container 
a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d. Jun 20 19:13:33.501577 containerd[1476]: time="2025-06-20T19:13:33.501405077Z" level=info msg="StartContainer for \"a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d\" returns successfully" Jun 20 19:13:33.513136 systemd[1]: cri-containerd-a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d.scope: Deactivated successfully. Jun 20 19:13:33.561944 containerd[1476]: time="2025-06-20T19:13:33.561674681Z" level=info msg="shim disconnected" id=a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d namespace=k8s.io Jun 20 19:13:33.561944 containerd[1476]: time="2025-06-20T19:13:33.561805013Z" level=warning msg="cleaning up after shim disconnected" id=a88ccc38b509ab3374b2210dc54462cca31a36a7188446455fe1cb7485ed877d namespace=k8s.io Jun 20 19:13:33.561944 containerd[1476]: time="2025-06-20T19:13:33.561826975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:34.264456 containerd[1476]: time="2025-06-20T19:13:34.264266545Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:13:34.290685 containerd[1476]: time="2025-06-20T19:13:34.290512749Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763\"" Jun 20 19:13:34.292563 containerd[1476]: time="2025-06-20T19:13:34.291403638Z" level=info msg="StartContainer for \"7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763\"" Jun 20 19:13:34.339177 systemd[1]: Started cri-containerd-7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763.scope - libcontainer container 7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763. 
Jun 20 19:13:34.379426 containerd[1476]: time="2025-06-20T19:13:34.379363546Z" level=info msg="StartContainer for \"7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763\" returns successfully" Jun 20 19:13:34.387776 systemd[1]: cri-containerd-7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763.scope: Deactivated successfully. Jun 20 19:13:34.427145 containerd[1476]: time="2025-06-20T19:13:34.427065393Z" level=info msg="shim disconnected" id=7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763 namespace=k8s.io Jun 20 19:13:34.427145 containerd[1476]: time="2025-06-20T19:13:34.427143549Z" level=warning msg="cleaning up after shim disconnected" id=7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763 namespace=k8s.io Jun 20 19:13:34.427484 containerd[1476]: time="2025-06-20T19:13:34.427156833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:34.446975 containerd[1476]: time="2025-06-20T19:13:34.446846779Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:13:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:13:35.201599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7537a971abd47a4ccab627047993d95f5ed12542d5d41385906921bc5e9ff763-rootfs.mount: Deactivated successfully. 
Jun 20 19:13:35.269065 containerd[1476]: time="2025-06-20T19:13:35.268728251Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:13:35.294790 containerd[1476]: time="2025-06-20T19:13:35.294723244Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027\"" Jun 20 19:13:35.297945 containerd[1476]: time="2025-06-20T19:13:35.296291596Z" level=info msg="StartContainer for \"def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027\"" Jun 20 19:13:35.351160 systemd[1]: Started cri-containerd-def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027.scope - libcontainer container def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027. Jun 20 19:13:35.404969 containerd[1476]: time="2025-06-20T19:13:35.404471497Z" level=info msg="StartContainer for \"def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027\" returns successfully" Jun 20 19:13:35.412012 systemd[1]: cri-containerd-def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027.scope: Deactivated successfully. Jun 20 19:13:35.456692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027-rootfs.mount: Deactivated successfully. 
Jun 20 19:13:35.462299 containerd[1476]: time="2025-06-20T19:13:35.462214058Z" level=info msg="shim disconnected" id=def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027 namespace=k8s.io Jun 20 19:13:35.462299 containerd[1476]: time="2025-06-20T19:13:35.462285058Z" level=warning msg="cleaning up after shim disconnected" id=def883d1f721305aab9c78089fea7cd22b1d7236a00d6ed62a84067f79c0c027 namespace=k8s.io Jun 20 19:13:35.462299 containerd[1476]: time="2025-06-20T19:13:35.462301897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:36.017431 kubelet[2667]: E0620 19:13:36.017358 2667 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:13:36.274610 containerd[1476]: time="2025-06-20T19:13:36.274325890Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:13:36.298110 containerd[1476]: time="2025-06-20T19:13:36.298054095Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87\"" Jun 20 19:13:36.301119 containerd[1476]: time="2025-06-20T19:13:36.299765416Z" level=info msg="StartContainer for \"8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87\"" Jun 20 19:13:36.361558 systemd[1]: run-containerd-runc-k8s.io-8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87-runc.x10O1D.mount: Deactivated successfully. Jun 20 19:13:36.373192 systemd[1]: Started cri-containerd-8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87.scope - libcontainer container 8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87. 
Jun 20 19:13:36.415968 systemd[1]: cri-containerd-8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87.scope: Deactivated successfully. Jun 20 19:13:36.422883 containerd[1476]: time="2025-06-20T19:13:36.422161816Z" level=info msg="StartContainer for \"8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87\" returns successfully" Jun 20 19:13:36.463407 containerd[1476]: time="2025-06-20T19:13:36.463327739Z" level=info msg="shim disconnected" id=8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87 namespace=k8s.io Jun 20 19:13:36.463407 containerd[1476]: time="2025-06-20T19:13:36.463407186Z" level=warning msg="cleaning up after shim disconnected" id=8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87 namespace=k8s.io Jun 20 19:13:36.463874 containerd[1476]: time="2025-06-20T19:13:36.463421343Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:13:37.296127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dbebad2e1407808fa0252110b3b49fede8c95e0f7d7ec857f3b5c7aabfcab87-rootfs.mount: Deactivated successfully. 
Jun 20 19:13:37.299091 containerd[1476]: time="2025-06-20T19:13:37.298548395Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:13:37.329093 containerd[1476]: time="2025-06-20T19:13:37.329026202Z" level=info msg="CreateContainer within sandbox \"6422d6a98634ea2f325f72c5e35ab505b2cc3f58bbbf0bc0bd7a715b01080927\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d527e7c226a1886003a33a34366c8717c89b82bf3b47a2bfd63e4b12763a5d2e\"" Jun 20 19:13:37.331840 containerd[1476]: time="2025-06-20T19:13:37.331785380Z" level=info msg="StartContainer for \"d527e7c226a1886003a33a34366c8717c89b82bf3b47a2bfd63e4b12763a5d2e\"" Jun 20 19:13:37.387200 systemd[1]: Started cri-containerd-d527e7c226a1886003a33a34366c8717c89b82bf3b47a2bfd63e4b12763a5d2e.scope - libcontainer container d527e7c226a1886003a33a34366c8717c89b82bf3b47a2bfd63e4b12763a5d2e. Jun 20 19:13:37.432787 containerd[1476]: time="2025-06-20T19:13:37.430213990Z" level=info msg="StartContainer for \"d527e7c226a1886003a33a34366c8717c89b82bf3b47a2bfd63e4b12763a5d2e\" returns successfully" Jun 20 19:13:37.975993 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 20 19:13:38.292217 systemd[1]: run-containerd-runc-k8s.io-d527e7c226a1886003a33a34366c8717c89b82bf3b47a2bfd63e4b12763a5d2e-runc.5uqKA4.mount: Deactivated successfully. 
Jun 20 19:13:38.543836 kubelet[2667]: I0620 19:13:38.543633 2667 setters.go:602] "Node became not ready" node="ci-4230-2-0-d6c536ca565e7bf83b2b.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:13:38Z","lastTransitionTime":"2025-06-20T19:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 19:13:40.828588 kubelet[2667]: E0620 19:13:40.828509 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-wnrpc" podUID="043e54ac-30cd-45fa-8a15-010c55524474" Jun 20 19:13:41.353113 systemd-networkd[1367]: lxc_health: Link UP Jun 20 19:13:41.367150 systemd-networkd[1367]: lxc_health: Gained carrier Jun 20 19:13:41.406843 kubelet[2667]: I0620 19:13:41.406614 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hlxbb" podStartSLOduration=10.406586627 podStartE2EDuration="10.406586627s" podCreationTimestamp="2025-06-20 19:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:13:38.310252877 +0000 UTC m=+112.694265618" watchObservedRunningTime="2025-06-20 19:13:41.406586627 +0000 UTC m=+115.790599373" Jun 20 19:13:43.343821 systemd-networkd[1367]: lxc_health: Gained IPv6LL Jun 20 19:13:45.873731 containerd[1476]: time="2025-06-20T19:13:45.873470202Z" level=info msg="StopPodSandbox for \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\"" Jun 20 19:13:45.873731 containerd[1476]: time="2025-06-20T19:13:45.873630429Z" level=info msg="TearDown network for sandbox 
\"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" successfully" Jun 20 19:13:45.873731 containerd[1476]: time="2025-06-20T19:13:45.873652509Z" level=info msg="StopPodSandbox for \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" returns successfully" Jun 20 19:13:45.876017 containerd[1476]: time="2025-06-20T19:13:45.875265823Z" level=info msg="RemovePodSandbox for \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\"" Jun 20 19:13:45.876017 containerd[1476]: time="2025-06-20T19:13:45.875314024Z" level=info msg="Forcibly stopping sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\"" Jun 20 19:13:45.876017 containerd[1476]: time="2025-06-20T19:13:45.875413568Z" level=info msg="TearDown network for sandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" successfully" Jun 20 19:13:45.882972 containerd[1476]: time="2025-06-20T19:13:45.882171510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 20 19:13:45.882972 containerd[1476]: time="2025-06-20T19:13:45.882302746Z" level=info msg="RemovePodSandbox \"06e238c59e577a248fcccb034f6448d3a3c1bf2525dde2e9a0faf38c67ad6b19\" returns successfully" Jun 20 19:13:45.884280 containerd[1476]: time="2025-06-20T19:13:45.883929067Z" level=info msg="StopPodSandbox for \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\"" Jun 20 19:13:45.884280 containerd[1476]: time="2025-06-20T19:13:45.884078081Z" level=info msg="TearDown network for sandbox \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" successfully" Jun 20 19:13:45.884280 containerd[1476]: time="2025-06-20T19:13:45.884096346Z" level=info msg="StopPodSandbox for \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" returns successfully" Jun 20 19:13:45.885630 containerd[1476]: time="2025-06-20T19:13:45.885357546Z" level=info msg="RemovePodSandbox for \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\"" Jun 20 19:13:45.885630 containerd[1476]: time="2025-06-20T19:13:45.885421716Z" level=info msg="Forcibly stopping sandbox \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\"" Jun 20 19:13:45.885630 containerd[1476]: time="2025-06-20T19:13:45.885518513Z" level=info msg="TearDown network for sandbox \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" successfully" Jun 20 19:13:45.892136 containerd[1476]: time="2025-06-20T19:13:45.891607279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 20 19:13:45.892136 containerd[1476]: time="2025-06-20T19:13:45.891694871Z" level=info msg="RemovePodSandbox \"5d10ee3f3705a0070ef0e90525655c44d36d1b01f54a100f5b16499aad955f9f\" returns successfully" Jun 20 19:13:46.333796 ntpd[1439]: Listen normally on 15 lxc_health [fe80::b072:42ff:fec9:8bf3%14]:123 Jun 20 19:13:46.334498 ntpd[1439]: 20 Jun 19:13:46 ntpd[1439]: Listen normally on 15 lxc_health [fe80::b072:42ff:fec9:8bf3%14]:123 Jun 20 19:13:47.908154 sshd[4458]: Connection closed by 147.75.109.163 port 39516 Jun 20 19:13:47.909281 sshd-session[4455]: pam_unix(sshd:session): session closed for user core Jun 20 19:13:47.916540 systemd[1]: sshd@27-10.128.0.67:22-147.75.109.163:39516.service: Deactivated successfully. Jun 20 19:13:47.920500 systemd[1]: session-28.scope: Deactivated successfully. Jun 20 19:13:47.922482 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit. Jun 20 19:13:47.924764 systemd-logind[1458]: Removed session 28.