Aug 13 00:51:33.020318 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:51:33.020343 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:51:33.020353 kernel: BIOS-provided physical RAM map:
Aug 13 00:51:33.020361 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:51:33.020368 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Aug 13 00:51:33.020375 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Aug 13 00:51:33.020387 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Aug 13 00:51:33.020395 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Aug 13 00:51:33.020402 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Aug 13 00:51:33.020410 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Aug 13 00:51:33.020416 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Aug 13 00:51:33.020423 kernel: printk: bootconsole [earlyser0] enabled
Aug 13 00:51:33.020431 kernel: NX (Execute Disable) protection: active
Aug 13 00:51:33.020436 kernel: efi: EFI v2.70 by Microsoft
Aug 13 00:51:33.020449 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Aug 13 00:51:33.020455 kernel: random: crng init done
Aug 13 00:51:33.020463 kernel: SMBIOS 3.1.0 present.
Aug 13 00:51:33.020471 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Aug 13 00:51:33.020477 kernel: Hypervisor detected: Microsoft Hyper-V
Aug 13 00:51:33.020487 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Aug 13 00:51:33.020494 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
Aug 13 00:51:33.020501 kernel: Hyper-V: Nested features: 0x1e0101
Aug 13 00:51:33.020511 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Aug 13 00:51:33.020517 kernel: Hyper-V: Using hypercall for remote TLB flush
Aug 13 00:51:33.020527 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 00:51:33.020533 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Aug 13 00:51:33.020542 kernel: tsc: Detected 2593.905 MHz processor
Aug 13 00:51:33.020550 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:51:33.020557 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:51:33.020566 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Aug 13 00:51:33.020574 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:51:33.020582 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Aug 13 00:51:33.020594 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Aug 13 00:51:33.020602 kernel: Using GB pages for direct mapping
Aug 13 00:51:33.020611 kernel: Secure boot disabled
Aug 13 00:51:33.020621 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:51:33.020631 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Aug 13 00:51:33.020657 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020669 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020680 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Aug 13 00:51:33.020698 kernel: ACPI: FACS 0x000000003FFFE000 000040
Aug 13 00:51:33.020711 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020723 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020736 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020749 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020762 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020778 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020790 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:51:33.020803 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Aug 13 00:51:33.020817 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Aug 13 00:51:33.020830 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Aug 13 00:51:33.020842 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Aug 13 00:51:33.020856 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Aug 13 00:51:33.020869 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Aug 13 00:51:33.020884 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Aug 13 00:51:33.020897 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Aug 13 00:51:33.020910 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Aug 13 00:51:33.020945 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Aug 13 00:51:33.020957 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 00:51:33.020969 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 00:51:33.020982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Aug 13 00:51:33.020995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Aug 13 00:51:33.021008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Aug 13 00:51:33.021025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Aug 13 00:51:33.021038 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Aug 13 00:51:33.021051 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Aug 13 00:51:33.021064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Aug 13 00:51:33.021077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Aug 13 00:51:33.021090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Aug 13 00:51:33.021103 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Aug 13 00:51:33.021117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Aug 13 00:51:33.021130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Aug 13 00:51:33.021145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Aug 13 00:51:33.021158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Aug 13 00:51:33.021171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Aug 13 00:51:33.021184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Aug 13 00:51:33.021197 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Aug 13 00:51:33.021210 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Aug 13 00:51:33.021223 kernel: Zone ranges:
Aug 13 00:51:33.021236 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:51:33.021249 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:51:33.021265 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 00:51:33.021278 kernel: Movable zone start for each node
Aug 13 00:51:33.021291 kernel: Early memory node ranges
Aug 13 00:51:33.021304 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 00:51:33.021317 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Aug 13 00:51:33.021330 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Aug 13 00:51:33.021343 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 00:51:33.021356 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Aug 13 00:51:33.021369 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:51:33.021384 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 00:51:33.021397 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Aug 13 00:51:33.021409 kernel: ACPI: PM-Timer IO Port: 0x408
Aug 13 00:51:33.021422 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Aug 13 00:51:33.021435 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:51:33.021448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:51:33.021461 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:51:33.021474 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Aug 13 00:51:33.021486 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 00:51:33.021501 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Aug 13 00:51:33.021514 kernel: Booting paravirtualized kernel on Hyper-V
Aug 13 00:51:33.021527 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:51:33.021540 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:51:33.021553 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Aug 13 00:51:33.021566 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Aug 13 00:51:33.021578 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:51:33.021590 kernel: Hyper-V: PV spinlocks enabled
Aug 13 00:51:33.021603 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:51:33.021618 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Aug 13 00:51:33.021631 kernel: Policy zone: Normal
Aug 13 00:51:33.021646 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:51:33.021659 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:51:33.021672 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 00:51:33.021685 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:51:33.021699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:51:33.021712 kernel: Memory: 8079144K/8387460K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 308056K reserved, 0K cma-reserved)
Aug 13 00:51:33.021728 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:51:33.021741 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:51:33.021762 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:51:33.021778 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:51:33.021793 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:51:33.021807 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:51:33.021820 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:51:33.021834 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:51:33.021848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:51:33.021861 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:51:33.021875 kernel: Using NULL legacy PIC
Aug 13 00:51:33.021891 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Aug 13 00:51:33.021904 kernel: Console: colour dummy device 80x25
Aug 13 00:51:33.021918 kernel: printk: console [tty1] enabled
Aug 13 00:51:33.035738 kernel: printk: console [ttyS0] enabled
Aug 13 00:51:33.035757 kernel: printk: bootconsole [earlyser0] disabled
Aug 13 00:51:33.035776 kernel: ACPI: Core revision 20210730
Aug 13 00:51:33.035791 kernel: Failed to register legacy timer interrupt
Aug 13 00:51:33.035805 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:51:33.035819 kernel: Hyper-V: Using IPI hypercalls
Aug 13 00:51:33.035833 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Aug 13 00:51:33.035847 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 00:51:33.035862 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 00:51:33.035876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:51:33.035890 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:51:33.035904 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:51:33.035920 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 00:51:33.035942 kernel: RETBleed: Vulnerable
Aug 13 00:51:33.035956 kernel: Speculative Store Bypass: Vulnerable
Aug 13 00:51:33.035969 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:51:33.035983 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:51:33.035997 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:51:33.036010 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:51:33.036024 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:51:33.036037 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:51:33.036052 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 00:51:33.036068 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 00:51:33.036082 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 00:51:33.036095 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:51:33.036109 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 13 00:51:33.036123 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 13 00:51:33.036137 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 13 00:51:33.036151 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Aug 13 00:51:33.036164 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:51:33.036178 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:51:33.036192 kernel: LSM: Security Framework initializing
Aug 13 00:51:33.036205 kernel: SELinux: Initializing.
Aug 13 00:51:33.036219 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:51:33.036236 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:51:33.036250 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 00:51:33.036264 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 00:51:33.036278 kernel: signal: max sigframe size: 3632
Aug 13 00:51:33.036292 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:51:33.036306 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:51:33.036320 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:51:33.036334 kernel: x86: Booting SMP configuration:
Aug 13 00:51:33.036348 kernel: .... node #0, CPUs: #1
Aug 13 00:51:33.036362 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Aug 13 00:51:33.036381 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 00:51:33.036395 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:51:33.036409 kernel: smpboot: Max logical packages: 1
Aug 13 00:51:33.036423 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Aug 13 00:51:33.036437 kernel: devtmpfs: initialized
Aug 13 00:51:33.036451 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:51:33.036465 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 13 00:51:33.036479 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:51:33.036495 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:51:33.036509 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:51:33.036523 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:51:33.036536 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:51:33.036551 kernel: audit: type=2000 audit(1755046291.023:1): state=initialized audit_enabled=0 res=1
Aug 13 00:51:33.036564 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:51:33.036578 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:51:33.036593 kernel: cpuidle: using governor menu
Aug 13 00:51:33.036606 kernel: ACPI: bus type PCI registered
Aug 13 00:51:33.036622 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:51:33.036634 kernel: dca service started, version 1.12.1
Aug 13 00:51:33.036646 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:51:33.036658 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:51:33.036675 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:51:33.036691 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:51:33.036712 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:51:33.036732 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:51:33.036747 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:51:33.036762 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:51:33.036776 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:51:33.036788 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:51:33.036802 kernel: ACPI: Interpreter enabled
Aug 13 00:51:33.036814 kernel: ACPI: PM: (supports S0 S5)
Aug 13 00:51:33.036827 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:51:33.036841 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:51:33.036854 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Aug 13 00:51:33.036867 kernel: iommu: Default domain type: Translated
Aug 13 00:51:33.036883 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:51:33.036896 kernel: vgaarb: loaded
Aug 13 00:51:33.036909 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:51:33.036937 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:51:33.036951 kernel: PTP clock support registered
Aug 13 00:51:33.036964 kernel: Registered efivars operations
Aug 13 00:51:33.036978 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:51:33.036990 kernel: PCI: System does not support PCI
Aug 13 00:51:33.037002 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Aug 13 00:51:33.037018 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:51:33.037040 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:51:33.037055 kernel: pnp: PnP ACPI init
Aug 13 00:51:33.037068 kernel: pnp: PnP ACPI: found 3 devices
Aug 13 00:51:33.037081 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:51:33.037095 kernel: NET: Registered PF_INET protocol family
Aug 13 00:51:33.037109 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:51:33.037124 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 00:51:33.037138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:51:33.037154 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:51:33.037168 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Aug 13 00:51:33.037182 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 00:51:33.037195 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:51:33.037209 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:51:33.037222 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:51:33.037235 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:51:33.037248 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:51:33.037268 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:51:33.037285 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Aug 13 00:51:33.037298 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 00:51:33.037312 kernel: Initialise system trusted keyrings
Aug 13 00:51:33.037325 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 00:51:33.037344 kernel: Key type asymmetric registered
Aug 13 00:51:33.037358 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:51:33.037372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:51:33.037385 kernel: io scheduler mq-deadline registered
Aug 13 00:51:33.037398 kernel: io scheduler kyber registered
Aug 13 00:51:33.037420 kernel: io scheduler bfq registered
Aug 13 00:51:33.037434 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:51:33.037448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:51:33.037461 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:51:33.037474 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 00:51:33.037494 kernel: i8042: PNP: No PS/2 controller found.
Aug 13 00:51:33.037675 kernel: rtc_cmos 00:02: registered as rtc0
Aug 13 00:51:33.037793 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T00:51:32 UTC (1755046292)
Aug 13 00:51:33.037916 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Aug 13 00:51:33.037944 kernel: intel_pstate: CPU model not supported
Aug 13 00:51:33.037958 kernel: efifb: probing for efifb
Aug 13 00:51:33.037972 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 00:51:33.037985 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 00:51:33.037998 kernel: efifb: scrolling: redraw
Aug 13 00:51:33.038012 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:51:33.038030 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:51:33.038048 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:51:33.038062 kernel: pstore: Registered efi as persistent store backend
Aug 13 00:51:33.038076 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:51:33.038090 kernel: Segment Routing with IPv6
Aug 13 00:51:33.038103 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:51:33.038116 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:51:33.038129 kernel: Key type dns_resolver registered
Aug 13 00:51:33.038142 kernel: IPI shorthand broadcast: enabled
Aug 13 00:51:33.038160 kernel: sched_clock: Marking stable (697654800, 19773900)->(883012600, -165583900)
Aug 13 00:51:33.038175 kernel: registered taskstats version 1
Aug 13 00:51:33.038192 kernel: Loading compiled-in X.509 certificates
Aug 13 00:51:33.038206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 00:51:33.038219 kernel: Key type .fscrypt registered
Aug 13 00:51:33.038232 kernel: Key type fscrypt-provisioning registered
Aug 13 00:51:33.038246 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:51:33.038260 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:51:33.038274 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:51:33.038288 kernel: ima: No architecture policies found
Aug 13 00:51:33.038303 kernel: clk: Disabling unused clocks
Aug 13 00:51:33.038316 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 00:51:33.038335 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 00:51:33.038349 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 00:51:33.038363 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 00:51:33.038376 kernel: Run /init as init process
Aug 13 00:51:33.038390 kernel: with arguments:
Aug 13 00:51:33.038404 kernel: /init
Aug 13 00:51:33.038417 kernel: with environment:
Aug 13 00:51:33.038433 kernel: HOME=/
Aug 13 00:51:33.038453 kernel: TERM=linux
Aug 13 00:51:33.038467 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:51:33.038483 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:51:33.038500 systemd[1]: Detected virtualization microsoft.
Aug 13 00:51:33.038515 systemd[1]: Detected architecture x86-64.
Aug 13 00:51:33.038528 systemd[1]: Running in initrd.
Aug 13 00:51:33.038542 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:51:33.038567 systemd[1]: Hostname set to .
Aug 13 00:51:33.038583 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:51:33.038597 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:51:33.038610 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:51:33.038629 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:51:33.038644 systemd[1]: Reached target paths.target.
Aug 13 00:51:33.038658 systemd[1]: Reached target slices.target.
Aug 13 00:51:33.038678 systemd[1]: Reached target swap.target.
Aug 13 00:51:33.038695 systemd[1]: Reached target timers.target.
Aug 13 00:51:33.038709 systemd[1]: Listening on iscsid.socket.
Aug 13 00:51:33.038723 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:51:33.038744 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:51:33.038758 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:51:33.038773 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:51:33.038793 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:51:33.038807 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:51:33.038825 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:51:33.038845 systemd[1]: Reached target sockets.target.
Aug 13 00:51:33.038860 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:51:33.038875 systemd[1]: Finished network-cleanup.service.
Aug 13 00:51:33.038895 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:51:33.038909 systemd[1]: Starting systemd-journald.service...
Aug 13 00:51:33.041960 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:51:33.041985 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:51:33.041999 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:51:33.042018 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:51:33.042033 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:51:33.042046 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:51:33.042061 kernel: audit: type=1130 audit(1755046293.021:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.042075 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:51:33.042093 systemd-journald[183]: Journal started
Aug 13 00:51:33.042163 systemd-journald[183]: Runtime Journal (/run/log/journal/5b73a5fd651b4fbcaeaf33437877c384) is 8.0M, max 159.0M, 151.0M free.
Aug 13 00:51:33.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.000960 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 00:51:33.056199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:51:33.066938 systemd[1]: Started systemd-journald.service.
Aug 13 00:51:33.074790 systemd-resolved[185]: Positive Trust Anchors:
Aug 13 00:51:33.080374 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:51:33.096965 kernel: audit: type=1130 audit(1755046293.083:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.084726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:51:33.107939 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:51:33.104683 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:51:33.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.120313 systemd-resolved[185]: Defaulting to hostname 'linux'.
Aug 13 00:51:33.155026 kernel: audit: type=1130 audit(1755046293.083:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.155055 kernel: Bridge firewalling registered
Aug 13 00:51:33.155072 kernel: audit: type=1130 audit(1755046293.136:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.129153 systemd[1]: Started systemd-resolved.service.
Aug 13 00:51:33.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.131679 systemd-modules-load[184]: Inserted module 'br_netfilter'
Aug 13 00:51:33.171998 kernel: audit: type=1130 audit(1755046293.147:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.137187 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:51:33.148754 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:51:33.151489 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:51:33.181859 dracut-cmdline[200]: dracut-dracut-053
Aug 13 00:51:33.185552 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:51:33.209941 kernel: SCSI subsystem initialized
Aug 13 00:51:33.230942 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:51:33.230987 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:51:33.239153 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 00:51:33.243187 systemd-modules-load[184]: Inserted module 'dm_multipath'
Aug 13 00:51:33.245777 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:51:33.262388 kernel: audit: type=1130 audit(1755046293.247:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.260883 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:51:33.273018 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:51:33.289523 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:51:33.289550 kernel: audit: type=1130 audit(1755046293.277:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.301944 kernel: iscsi: registered transport (tcp)
Aug 13 00:51:33.328739 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:51:33.328812 kernel: QLogic iSCSI HBA Driver
Aug 13 00:51:33.358816 systemd[1]: Finished dracut-cmdline.service.
Aug 13 00:51:33.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.363694 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 00:51:33.377036 kernel: audit: type=1130 audit(1755046293.361:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:33.417948 kernel: raid6: avx512x4 gen() 18178 MB/s
Aug 13 00:51:33.436937 kernel: raid6: avx512x4 xor() 8222 MB/s
Aug 13 00:51:33.455934 kernel: raid6: avx512x2 gen() 18237 MB/s
Aug 13 00:51:33.475938 kernel: raid6: avx512x2 xor() 30020 MB/s
Aug 13 00:51:33.494953 kernel: raid6: avx512x1 gen() 18254 MB/s
Aug 13 00:51:33.513948 kernel: raid6: avx512x1 xor() 16600 MB/s
Aug 13 00:51:33.533937 kernel: raid6: avx2x4 gen() 18043 MB/s
Aug 13 00:51:33.552933 kernel: raid6: avx2x4 xor() 7750 MB/s
Aug 13 00:51:33.571936 kernel: raid6: avx2x2 gen() 18165 MB/s
Aug 13 00:51:33.591941 kernel: raid6: avx2x2 xor() 22187 MB/s
Aug 13 00:51:33.610933 kernel: raid6: avx2x1 gen() 13703 MB/s
Aug 13 00:51:33.630933 kernel: raid6: avx2x1 xor() 19459 MB/s
Aug 13 00:51:33.650936 kernel: raid6: sse2x4 gen() 11671 MB/s
Aug 13 00:51:33.669933 kernel: raid6: sse2x4 xor() 7263 MB/s
Aug 13 00:51:33.688933 kernel: raid6: sse2x2 gen() 12779 MB/s
Aug 13 00:51:33.708935 kernel: raid6: sse2x2 xor() 7492 MB/s
Aug 13 00:51:33.728933 kernel: raid6: sse2x1 gen() 11575 MB/s
Aug 13 00:51:33.747938 kernel: raid6: sse2x1 xor() 5909 MB/s
Aug 13 00:51:33.747967 kernel: raid6: using algorithm avx512x1 gen() 18254 MB/s
Aug 13 00:51:33.754446 kernel: raid6: .... xor() 16600 MB/s, rmw enabled
Aug 13 00:51:33.754463 kernel: raid6: using avx512x2 recovery algorithm
Aug 13 00:51:33.773950 kernel: xor: automatically using best checksumming function avx
Aug 13 00:51:33.868952 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Aug 13 00:51:33.877399 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 00:51:33.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 00:51:33.891947 kernel: audit: type=1130 audit(1755046293.878:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.890000 audit: BPF prog-id=7 op=LOAD Aug 13 00:51:33.890000 audit: BPF prog-id=8 op=LOAD Aug 13 00:51:33.890839 systemd[1]: Starting systemd-udevd.service... Aug 13 00:51:33.904525 systemd-udevd[382]: Using default interface naming scheme 'v252'. Aug 13 00:51:33.909251 systemd[1]: Started systemd-udevd.service. Aug 13 00:51:33.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.916340 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:51:33.933177 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Aug 13 00:51:33.962873 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:51:33.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.967528 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:51:34.002809 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:51:34.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:34.048942 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:51:34.077520 kernel: AVX2 version of gcm_enc/dec engaged. 
Aug 13 00:51:34.077578 kernel: AES CTR mode by8 optimization enabled Aug 13 00:51:34.088943 kernel: hv_vmbus: Vmbus version:5.2 Aug 13 00:51:34.098946 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 00:51:34.115945 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 00:51:34.124674 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:51:34.130938 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 00:51:34.140948 kernel: scsi host0: storvsc_host_t Aug 13 00:51:34.141126 kernel: scsi host1: storvsc_host_t Aug 13 00:51:34.141228 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 00:51:34.146946 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 00:51:34.146980 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 00:51:34.174972 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 00:51:34.193546 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 00:51:34.193610 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 00:51:34.202491 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 00:51:34.227010 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:51:34.227044 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 00:51:34.239489 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:51:34.239674 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:51:34.239844 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 00:51:34.240036 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 00:51:34.240209 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 00:51:34.240388 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:34.240408 kernel: sd 0:0:0:0: [sda] Attached SCSI disk 
Aug 13 00:51:34.344606 kernel: hv_netvsc 7c1e5204-72ee-7c1e-5204-72ee7c1e5204 eth0: VF slot 1 added Aug 13 00:51:34.356286 kernel: hv_vmbus: registering driver hv_pci Aug 13 00:51:34.356330 kernel: hv_pci 757a3cc4-ea6e-4f9f-80ce-f808bf877294: PCI VMBus probing: Using version 0x10004 Aug 13 00:51:34.408333 kernel: hv_pci 757a3cc4-ea6e-4f9f-80ce-f808bf877294: PCI host bridge to bus ea6e:00 Aug 13 00:51:34.408511 kernel: pci_bus ea6e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 13 00:51:34.408691 kernel: pci_bus ea6e:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:51:34.408838 kernel: pci ea6e:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 13 00:51:34.409027 kernel: pci ea6e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 00:51:34.409190 kernel: pci ea6e:00:02.0: enabling Extended Tags Aug 13 00:51:34.409344 kernel: pci ea6e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ea6e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 13 00:51:34.409500 kernel: pci_bus ea6e:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:51:34.409645 kernel: pci ea6e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 00:51:34.501011 kernel: mlx5_core ea6e:00:02.0: enabling device (0000 -> 0002) Aug 13 00:51:34.766489 kernel: mlx5_core ea6e:00:02.0: firmware version: 14.30.5000 Aug 13 00:51:34.766621 kernel: mlx5_core ea6e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Aug 13 00:51:34.766723 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (440) Aug 13 00:51:34.766734 kernel: mlx5_core ea6e:00:02.0: Supported tc offload range - chains: 1, prios: 1 Aug 13 00:51:34.766831 kernel: mlx5_core ea6e:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Aug 13 00:51:34.766949 kernel: hv_netvsc 7c1e5204-72ee-7c1e-5204-72ee7c1e5204 eth0: VF registering: eth1 Aug 13 00:51:34.767046 kernel: 
mlx5_core ea6e:00:02.0 eth1: joined to eth0 Aug 13 00:51:34.666727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:51:34.689535 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:51:34.777942 kernel: mlx5_core ea6e:00:02.0 enP60014s1: renamed from eth1 Aug 13 00:51:34.923591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:51:34.952304 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:51:34.954621 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:51:34.959901 systemd[1]: Starting disk-uuid.service... Aug 13 00:51:34.977944 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:34.989948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:34.999942 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:35.999949 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:51:36.000124 disk-uuid[563]: The operation has completed successfully. Aug 13 00:51:36.074500 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:51:36.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.074604 systemd[1]: Finished disk-uuid.service. Aug 13 00:51:36.089486 systemd[1]: Starting verity-setup.service... Aug 13 00:51:36.134942 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:51:36.522575 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:51:36.526226 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:51:36.532467 systemd[1]: Finished verity-setup.service. 
Aug 13 00:51:36.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.606954 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:51:36.606405 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:51:36.609682 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:51:36.613645 systemd[1]: Starting ignition-setup.service... Aug 13 00:51:36.617886 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:51:36.641265 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:36.641314 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:51:36.641328 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:51:36.689988 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:51:36.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.691000 audit: BPF prog-id=9 op=LOAD Aug 13 00:51:36.692894 systemd[1]: Starting systemd-networkd.service... Aug 13 00:51:36.717824 systemd-networkd[827]: lo: Link UP Aug 13 00:51:36.718953 systemd-networkd[827]: lo: Gained carrier Aug 13 00:51:36.719872 systemd-networkd[827]: Enumeration completed Aug 13 00:51:36.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.719954 systemd[1]: Started systemd-networkd.service. Aug 13 00:51:36.722135 systemd-networkd[827]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:51:36.723128 systemd[1]: Reached target network.target. Aug 13 00:51:36.728374 systemd[1]: Starting iscsiuio.service... Aug 13 00:51:36.737091 systemd[1]: Started iscsiuio.service. Aug 13 00:51:36.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.740852 systemd[1]: Starting iscsid.service... Aug 13 00:51:36.744268 iscsid[832]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:36.744268 iscsid[832]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 00:51:36.744268 iscsid[832]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:51:36.744268 iscsid[832]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:51:36.744268 iscsid[832]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:51:36.744268 iscsid[832]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:36.744268 iscsid[832]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:51:36.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.746002 systemd[1]: Started iscsid.service. 
Aug 13 00:51:36.749110 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:51:36.771551 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:51:36.773644 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:51:36.778378 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:51:36.780388 systemd[1]: Reached target remote-fs.target. Aug 13 00:51:36.792700 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:51:36.800703 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:51:36.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.815941 kernel: mlx5_core ea6e:00:02.0 enP60014s1: Link up Aug 13 00:51:36.854948 kernel: hv_netvsc 7c1e5204-72ee-7c1e-5204-72ee7c1e5204 eth0: Data path switched to VF: enP60014s1 Aug 13 00:51:36.855150 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:51:36.859181 systemd-networkd[827]: enP60014s1: Link UP Aug 13 00:51:36.859308 systemd-networkd[827]: eth0: Link UP Aug 13 00:51:36.859537 systemd-networkd[827]: eth0: Gained carrier Aug 13 00:51:36.866183 systemd-networkd[827]: enP60014s1: Gained carrier Aug 13 00:51:36.868299 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:51:36.893060 systemd-networkd[827]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:51:36.961372 systemd[1]: Finished ignition-setup.service. Aug 13 00:51:36.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:36.966127 systemd[1]: Starting ignition-fetch-offline.service... 
Aug 13 00:51:38.319178 systemd-networkd[827]: eth0: Gained IPv6LL Aug 13 00:51:40.623539 ignition[854]: Ignition 2.14.0 Aug 13 00:51:40.623557 ignition[854]: Stage: fetch-offline Aug 13 00:51:40.623655 ignition[854]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:40.623708 ignition[854]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:40.722194 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:40.722398 ignition[854]: parsed url from cmdline: "" Aug 13 00:51:40.722403 ignition[854]: no config URL provided Aug 13 00:51:40.722417 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:51:40.750051 kernel: kauditd_printk_skb: 16 callbacks suppressed Aug 13 00:51:40.750083 kernel: audit: type=1130 audit(1755046300.730:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.728326 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:51:40.722426 ignition[854]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:51:40.732935 systemd[1]: Starting ignition-fetch.service... 
Aug 13 00:51:40.722434 ignition[854]: failed to fetch config: resource requires networking Aug 13 00:51:40.722675 ignition[854]: Ignition finished successfully Aug 13 00:51:40.741571 ignition[860]: Ignition 2.14.0 Aug 13 00:51:40.741578 ignition[860]: Stage: fetch Aug 13 00:51:40.741679 ignition[860]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:40.741709 ignition[860]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:40.745854 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:40.746990 ignition[860]: parsed url from cmdline: "" Aug 13 00:51:40.746998 ignition[860]: no config URL provided Aug 13 00:51:40.747011 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:51:40.747025 ignition[860]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:51:40.747079 ignition[860]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:51:40.819659 ignition[860]: GET result: OK Aug 13 00:51:40.819817 ignition[860]: config has been read from IMDS userdata Aug 13 00:51:40.821208 ignition[860]: parsing config with SHA512: 8a6a7498e2ebb54785d67a32e4ccc7f4278de031f3af7a9f441f0831693967394e122350d7516132634020b36429d218877c2ae9dd79e1d55966420a0adb15d8 Aug 13 00:51:40.842064 unknown[860]: fetched base config from "system" Aug 13 00:51:40.844276 unknown[860]: fetched base config from "system" Aug 13 00:51:40.844287 unknown[860]: fetched user config from "azure" Aug 13 00:51:40.845079 ignition[860]: fetch: fetch complete Aug 13 00:51:40.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.846819 systemd[1]: Finished ignition-fetch.service. 
Aug 13 00:51:40.864844 kernel: audit: type=1130 audit(1755046300.848:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.845086 ignition[860]: fetch: fetch passed Aug 13 00:51:40.849655 systemd[1]: Starting ignition-kargs.service... Aug 13 00:51:40.845137 ignition[860]: Ignition finished successfully Aug 13 00:51:40.873212 ignition[866]: Ignition 2.14.0 Aug 13 00:51:40.873223 ignition[866]: Stage: kargs Aug 13 00:51:40.873350 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:40.873383 ignition[866]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:40.877852 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:40.881143 ignition[866]: kargs: kargs passed Aug 13 00:51:40.881198 ignition[866]: Ignition finished successfully Aug 13 00:51:40.884728 systemd[1]: Finished ignition-kargs.service. Aug 13 00:51:40.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.887374 systemd[1]: Starting ignition-disks.service... Aug 13 00:51:40.901407 kernel: audit: type=1130 audit(1755046300.886:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:40.906756 ignition[872]: Ignition 2.14.0 Aug 13 00:51:40.906766 ignition[872]: Stage: disks Aug 13 00:51:40.906903 ignition[872]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:40.906955 ignition[872]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:40.916421 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:40.917601 ignition[872]: disks: disks passed Aug 13 00:51:40.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.919056 systemd[1]: Finished ignition-disks.service. Aug 13 00:51:40.936890 kernel: audit: type=1130 audit(1755046300.920:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.917638 ignition[872]: Ignition finished successfully Aug 13 00:51:40.921059 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:51:40.933427 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:51:40.936866 systemd[1]: Reached target local-fs.target. Aug 13 00:51:40.938646 systemd[1]: Reached target sysinit.target. Aug 13 00:51:40.940280 systemd[1]: Reached target basic.target. Aug 13 00:51:40.944492 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:51:41.030606 systemd-fsck[880]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks Aug 13 00:51:41.036511 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:51:41.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:41.041546 systemd[1]: Mounting sysroot.mount... Aug 13 00:51:41.055856 kernel: audit: type=1130 audit(1755046301.039:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.070945 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:51:41.072010 systemd[1]: Mounted sysroot.mount. Aug 13 00:51:41.073853 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:51:41.106542 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:51:41.112618 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:51:41.117058 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:51:41.117103 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:51:41.125280 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:51:41.200294 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:51:41.205317 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:51:41.224945 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (891) Aug 13 00:51:41.230068 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:51:41.240028 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:41.240049 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:51:41.240059 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:51:41.241907 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Aug 13 00:51:41.265832 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:51:41.285816 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:51:41.307279 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:51:41.951441 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:51:41.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.954268 systemd[1]: Starting ignition-mount.service... Aug 13 00:51:41.972697 kernel: audit: type=1130 audit(1755046301.953:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.973042 systemd[1]: Starting sysroot-boot.service... Aug 13 00:51:41.978163 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 00:51:41.978294 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Aug 13 00:51:42.000740 systemd[1]: Finished sysroot-boot.service. Aug 13 00:51:42.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:42.016043 kernel: audit: type=1130 audit(1755046302.004:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:42.050177 ignition[960]: INFO : Ignition 2.14.0 Aug 13 00:51:42.052083 ignition[960]: INFO : Stage: mount Aug 13 00:51:42.052083 ignition[960]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:42.052083 ignition[960]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:51:42.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:42.064016 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:51:42.064016 ignition[960]: INFO : mount: mount passed Aug 13 00:51:42.064016 ignition[960]: INFO : Ignition finished successfully Aug 13 00:51:42.080261 kernel: audit: type=1130 audit(1755046302.063:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:42.061121 systemd[1]: Finished ignition-mount.service. Aug 13 00:51:42.855296 coreos-metadata[890]: Aug 13 00:51:42.855 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:51:42.873967 coreos-metadata[890]: Aug 13 00:51:42.873 INFO Fetch successful Aug 13 00:51:42.907758 coreos-metadata[890]: Aug 13 00:51:42.907 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:51:42.918438 coreos-metadata[890]: Aug 13 00:51:42.918 INFO Fetch successful Aug 13 00:51:42.938747 coreos-metadata[890]: Aug 13 00:51:42.938 INFO wrote hostname ci-3510.3.8-a-4e9ab5f8c8 to /sysroot/etc/hostname Aug 13 00:51:42.944414 systemd[1]: Finished flatcar-metadata-hostname.service. Aug 13 00:51:42.947625 systemd[1]: Starting ignition-files.service... 
Aug 13 00:51:42.964833 kernel: audit: type=1130 audit(1755046302.945:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:42.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:42.963408 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 00:51:42.979941 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (969)
Aug 13 00:51:42.988338 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:51:42.988372 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:51:42.988385 kernel: BTRFS info (device sda6): has skinny extents
Aug 13 00:51:42.999018 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 00:51:43.010930 ignition[988]: INFO : Ignition 2.14.0
Aug 13 00:51:43.010930 ignition[988]: INFO : Stage: files
Aug 13 00:51:43.014196 ignition[988]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 00:51:43.014196 ignition[988]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Aug 13 00:51:43.026432 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:51:43.047492 ignition[988]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:51:43.063909 ignition[988]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:51:43.063909 ignition[988]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:51:43.125672 ignition[988]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:51:43.129268 ignition[988]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:51:43.163871 unknown[988]: wrote ssh authorized keys file for user: core
Aug 13 00:51:43.167126 ignition[988]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:51:43.179605 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 00:51:43.183873 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 00:51:43.244236 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:51:43.457994 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 00:51:43.472187 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:51:43.475979 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:51:43.649854 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:51:43.694255 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:51:43.698351 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:51:43.702538 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:51:43.706393 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:51:43.710436 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:51:43.714427 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:51:43.718647 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:51:43.722646 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:51:43.726716 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:51:43.741065 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:51:43.745520 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:51:43.749575 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:51:43.755419 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:51:43.765602 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Aug 13 00:51:43.769990 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 00:51:43.778406 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3634462367"
Aug 13 00:51:43.782729 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3634462367": device or resource busy
Aug 13 00:51:43.782729 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3634462367", trying btrfs: device or resource busy
Aug 13 00:51:43.782729 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3634462367"
Aug 13 00:51:43.798057 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3634462367"
Aug 13 00:51:43.798057 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3634462367"
Aug 13 00:51:43.798057 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3634462367"
Aug 13 00:51:43.798057 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Aug 13 00:51:43.798057 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Aug 13 00:51:43.798057 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 00:51:43.795333 systemd[1]: mnt-oem3634462367.mount: Deactivated successfully.
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem983645384"
Aug 13 00:51:43.825633 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem983645384": device or resource busy
Aug 13 00:51:43.825633 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem983645384", trying btrfs: device or resource busy
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem983645384"
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem983645384"
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem983645384"
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem983645384"
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:51:43.825633 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 00:51:44.247902 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Aug 13 00:51:44.419105 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:51:44.419105 ignition[988]: INFO : files: op(14): [started] processing unit "waagent.service"
Aug 13 00:51:44.419105 ignition[988]: INFO : files: op(14): [finished] processing unit "waagent.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(15): [started] processing unit "nvidia.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:51:44.431836 ignition[988]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:51:44.486016 kernel: audit: type=1130 audit(1755046304.459:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.456088 systemd[1]: Finished ignition-files.service.
Aug 13 00:51:44.488109 ignition[988]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:51:44.488109 ignition[988]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:51:44.488109 ignition[988]: INFO : files: files passed
Aug 13 00:51:44.488109 ignition[988]: INFO : Ignition finished successfully
Aug 13 00:51:44.473888 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Aug 13 00:51:44.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.476168 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Aug 13 00:51:44.477117 systemd[1]: Starting ignition-quench.service...
Aug 13 00:51:44.508960 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:51:44.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.496451 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:51:44.496557 systemd[1]: Finished ignition-quench.service.
Aug 13 00:51:44.508984 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Aug 13 00:51:44.511169 systemd[1]: Reached target ignition-complete.target.
Aug 13 00:51:44.517035 systemd[1]: Starting initrd-parse-etc.service...
Aug 13 00:51:44.535305 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:51:44.535412 systemd[1]: Finished initrd-parse-etc.service.
Aug 13 00:51:44.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.539307 systemd[1]: Reached target initrd-fs.target.
Aug 13 00:51:44.542568 systemd[1]: Reached target initrd.target.
Aug 13 00:51:44.544283 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 13 00:51:44.545050 systemd[1]: Starting dracut-pre-pivot.service...
Aug 13 00:51:44.557773 systemd[1]: Finished dracut-pre-pivot.service.
Aug 13 00:51:44.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.561870 systemd[1]: Starting initrd-cleanup.service...
Aug 13 00:51:44.571421 systemd[1]: Stopped target nss-lookup.target.
Aug 13 00:51:44.574840 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 13 00:51:44.577349 systemd[1]: Stopped target timers.target.
Aug 13 00:51:44.580315 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:51:44.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.580416 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 13 00:51:44.583780 systemd[1]: Stopped target initrd.target.
Aug 13 00:51:44.587351 systemd[1]: Stopped target basic.target.
Aug 13 00:51:44.590666 systemd[1]: Stopped target ignition-complete.target.
Aug 13 00:51:44.594028 systemd[1]: Stopped target ignition-diskful.target.
Aug 13 00:51:44.597703 systemd[1]: Stopped target initrd-root-device.target.
Aug 13 00:51:44.601435 systemd[1]: Stopped target remote-fs.target.
Aug 13 00:51:44.604975 systemd[1]: Stopped target remote-fs-pre.target.
Aug 13 00:51:44.608461 systemd[1]: Stopped target sysinit.target.
Aug 13 00:51:44.611776 systemd[1]: Stopped target local-fs.target.
Aug 13 00:51:44.615106 systemd[1]: Stopped target local-fs-pre.target.
Aug 13 00:51:44.618308 systemd[1]: Stopped target swap.target.
Aug 13 00:51:44.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.621385 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:51:44.621515 systemd[1]: Stopped dracut-pre-mount.service.
Aug 13 00:51:44.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.624910 systemd[1]: Stopped target cryptsetup.target.
Aug 13 00:51:44.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.628018 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:51:44.628156 systemd[1]: Stopped dracut-initqueue.service.
Aug 13 00:51:44.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.631978 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:51:44.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.661102 iscsid[832]: iscsid shutting down.
Aug 13 00:51:44.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.632113 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Aug 13 00:51:44.667384 ignition[1026]: INFO : Ignition 2.14.0
Aug 13 00:51:44.667384 ignition[1026]: INFO : Stage: umount
Aug 13 00:51:44.667384 ignition[1026]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 00:51:44.667384 ignition[1026]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Aug 13 00:51:44.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.635709 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:51:44.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.685465 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:51:44.685465 ignition[1026]: INFO : umount: umount passed
Aug 13 00:51:44.685465 ignition[1026]: INFO : Ignition finished successfully
Aug 13 00:51:44.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.635836 systemd[1]: Stopped ignition-files.service.
Aug 13 00:51:44.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.638963 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 00:51:44.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.639114 systemd[1]: Stopped flatcar-metadata-hostname.service.
Aug 13 00:51:44.643825 systemd[1]: Stopping ignition-mount.service...
Aug 13 00:51:44.647233 systemd[1]: Stopping iscsid.service...
Aug 13 00:51:44.648764 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:51:44.648946 systemd[1]: Stopped kmod-static-nodes.service.
Aug 13 00:51:44.652024 systemd[1]: Stopping sysroot-boot.service...
Aug 13 00:51:44.653683 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:51:44.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.653858 systemd[1]: Stopped systemd-udev-trigger.service.
Aug 13 00:51:44.656165 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:51:44.656317 systemd[1]: Stopped dracut-pre-trigger.service.
Aug 13 00:51:44.664818 systemd[1]: iscsid.service: Deactivated successfully.
Aug 13 00:51:44.664952 systemd[1]: Stopped iscsid.service.
Aug 13 00:51:44.675416 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:51:44.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.675498 systemd[1]: Stopped ignition-mount.service.
Aug 13 00:51:44.682585 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:51:44.682669 systemd[1]: Finished initrd-cleanup.service.
Aug 13 00:51:44.686208 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:51:44.686258 systemd[1]: Stopped ignition-disks.service.
Aug 13 00:51:44.689844 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:51:44.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.689886 systemd[1]: Stopped ignition-kargs.service.
Aug 13 00:51:44.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.697590 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:51:44.697641 systemd[1]: Stopped ignition-fetch.service.
Aug 13 00:51:44.769000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 00:51:44.699270 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:51:44.699311 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 13 00:51:44.703463 systemd[1]: Stopped target paths.target.
Aug 13 00:51:44.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.705180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:51:44.709981 systemd[1]: Stopped systemd-ask-password-console.path.
Aug 13 00:51:44.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.712753 systemd[1]: Stopped target slices.target.
Aug 13 00:51:44.717702 systemd[1]: Stopped target sockets.target.
Aug 13 00:51:44.719440 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:51:44.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.719478 systemd[1]: Closed iscsid.socket.
Aug 13 00:51:44.722538 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:51:44.722586 systemd[1]: Stopped ignition-setup.service.
Aug 13 00:51:44.728969 systemd[1]: Stopping iscsiuio.service...
Aug 13 00:51:44.732134 systemd[1]: iscsiuio.service: Deactivated successfully.
Aug 13 00:51:44.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.732229 systemd[1]: Stopped iscsiuio.service.
Aug 13 00:51:44.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.740640 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:51:44.740974 systemd[1]: Stopped target network.target.
Aug 13 00:51:44.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.742781 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:51:44.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.742825 systemd[1]: Closed iscsiuio.socket.
Aug 13 00:51:44.748071 systemd[1]: Stopping systemd-networkd.service...
Aug 13 00:51:44.750468 systemd[1]: Stopping systemd-resolved.service...
Aug 13 00:51:44.754973 systemd-networkd[827]: eth0: DHCPv6 lease lost
Aug 13 00:51:44.756683 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:51:44.835000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 00:51:44.756773 systemd[1]: Stopped systemd-networkd.service.
Aug 13 00:51:44.762895 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:51:44.762999 systemd[1]: Stopped systemd-resolved.service.
Aug 13 00:51:44.765818 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:51:44.852717 kernel: hv_netvsc 7c1e5204-72ee-7c1e-5204-72ee7c1e5204 eth0: Data path switched from VF: enP60014s1
Aug 13 00:51:44.765849 systemd[1]: Closed systemd-networkd.socket.
Aug 13 00:51:44.773415 systemd[1]: Stopping network-cleanup.service...
Aug 13 00:51:44.777340 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:51:44.777403 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 00:51:44.780853 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:51:44.780903 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 00:51:44.787159 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:51:44.787203 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 00:51:44.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:44.789424 systemd[1]: Stopping systemd-udevd.service...
Aug 13 00:51:44.795020 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:51:44.795523 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:51:44.795648 systemd[1]: Stopped systemd-udevd.service.
Aug 13 00:51:44.800078 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:51:44.800121 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 00:51:44.805006 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:51:44.805049 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 00:51:44.808319 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:51:44.808369 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 00:51:44.812082 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:51:44.812130 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 00:51:44.813818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:51:44.813856 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 00:51:44.818012 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 00:51:44.822017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:51:44.822090 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 00:51:44.826648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:51:44.826739 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Aug 13 00:51:44.867121 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:51:44.867220 systemd[1]: Stopped network-cleanup.service.
Aug 13 00:51:45.484125 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:51:45.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.484286 systemd[1]: Stopped sysroot-boot.service.
Aug 13 00:51:45.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.488733 systemd[1]: Reached target initrd-switch-root.target.
Aug 13 00:51:45.490863 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:51:45.490933 systemd[1]: Stopped initrd-setup-root.service.
Aug 13 00:51:45.495627 systemd[1]: Starting initrd-switch-root.service...
Aug 13 00:51:45.508517 systemd[1]: Switching root.
Aug 13 00:51:45.533186 systemd-journald[183]: Journal stopped
Aug 13 00:52:07.588143 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:52:07.588170 kernel: SELinux: Class mctp_socket not defined in policy.
Aug 13 00:52:07.588181 kernel: SELinux: Class anon_inode not defined in policy.
Aug 13 00:52:07.588189 kernel: SELinux: the above unknown classes and permissions will be allowed
Aug 13 00:52:07.588197 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:52:07.588205 kernel: SELinux: policy capability open_perms=1
Aug 13 00:52:07.588216 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:52:07.588224 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:52:07.588232 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:52:07.588240 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:52:07.588248 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:52:07.588255 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:52:07.588263 kernel: kauditd_printk_skb: 42 callbacks suppressed
Aug 13 00:52:07.588275 kernel: audit: type=1403 audit(1755046308.648:79): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:52:07.588288 systemd[1]: Successfully loaded SELinux policy in 341.491ms.
Aug 13 00:52:07.588300 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.464ms.
Aug 13 00:52:07.588313 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:52:07.588325 systemd[1]: Detected virtualization microsoft.
Aug 13 00:52:07.588337 systemd[1]: Detected architecture x86-64.
Aug 13 00:52:07.588349 systemd[1]: Detected first boot.
Aug 13 00:52:07.588361 systemd[1]: Hostname set to .
Aug 13 00:52:07.588370 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:52:07.588382 kernel: audit: type=1400 audit(1755046309.721:80): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 00:52:07.588395 kernel: audit: type=1400 audit(1755046309.738:81): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:52:07.588404 kernel: audit: type=1400 audit(1755046309.738:82): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:52:07.588418 kernel: audit: type=1334 audit(1755046309.749:83): prog-id=10 op=LOAD
Aug 13 00:52:07.588429 kernel: audit: type=1334 audit(1755046309.749:84): prog-id=10 op=UNLOAD
Aug 13 00:52:07.588441 kernel: audit: type=1334 audit(1755046309.760:85): prog-id=11 op=LOAD
Aug 13 00:52:07.588452 kernel: audit: type=1334 audit(1755046309.760:86): prog-id=11 op=UNLOAD
Aug 13 00:52:07.588464 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Aug 13 00:52:07.588473 kernel: audit: type=1400 audit(1755046311.510:87): avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Aug 13 00:52:07.588485 kernel: audit: type=1300 audit(1755046311.510:87): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d3a2 a1=c0000ce708 a2=c0000d6c00 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.588499 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:52:07.588513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:52:07.588530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:52:07.588548 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:52:07.588565 kernel: kauditd_printk_skb: 7 callbacks suppressed
Aug 13 00:52:07.588583 kernel: audit: type=1334 audit(1755046326.966:89): prog-id=12 op=LOAD
Aug 13 00:52:07.588601 kernel: audit: type=1334 audit(1755046326.966:90): prog-id=3 op=UNLOAD
Aug 13 00:52:07.588625 kernel: audit: type=1334 audit(1755046326.971:91): prog-id=13 op=LOAD
Aug 13 00:52:07.588652 kernel: audit: type=1334 audit(1755046326.975:92): prog-id=14 op=LOAD
Aug 13 00:52:07.588671 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:52:07.588690 kernel: audit: type=1334 audit(1755046326.975:93): prog-id=4 op=UNLOAD
Aug 13 00:52:07.588708 kernel: audit: type=1334 audit(1755046326.975:94): prog-id=5 op=UNLOAD
Aug 13 00:52:07.588726 kernel: audit: type=1131 audit(1755046326.976:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.588747 systemd[1]: Stopped initrd-switch-root.service.
Aug 13 00:52:07.588767 kernel: audit: type=1130 audit(1755046327.018:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.588788 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:52:07.588807 kernel: audit: type=1334 audit(1755046327.018:97): prog-id=12 op=UNLOAD
Aug 13 00:52:07.588826 systemd[1]: Created slice system-addon\x2dconfig.slice.
Aug 13 00:52:07.588844 kernel: audit: type=1131 audit(1755046327.018:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.588864 systemd[1]: Created slice system-addon\x2drun.slice.
Aug 13 00:52:07.588883 systemd[1]: Created slice system-getty.slice.
Aug 13 00:52:07.588901 systemd[1]: Created slice system-modprobe.slice.
Aug 13 00:52:07.588950 systemd[1]: Created slice system-serial\x2dgetty.slice.
Aug 13 00:52:07.588971 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Aug 13 00:52:07.588988 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Aug 13 00:52:07.589007 systemd[1]: Created slice user.slice.
Aug 13 00:52:07.589029 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:52:07.589047 systemd[1]: Started systemd-ask-password-wall.path.
Aug 13 00:52:07.589066 systemd[1]: Set up automount boot.automount.
Aug 13 00:52:07.589087 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Aug 13 00:52:07.589108 systemd[1]: Stopped target initrd-switch-root.target.
Aug 13 00:52:07.589132 systemd[1]: Stopped target initrd-fs.target.
Aug 13 00:52:07.589150 systemd[1]: Stopped target initrd-root-fs.target.
Aug 13 00:52:07.589171 systemd[1]: Reached target integritysetup.target.
Aug 13 00:52:07.589190 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 00:52:07.589209 systemd[1]: Reached target remote-fs.target.
Aug 13 00:52:07.589228 systemd[1]: Reached target slices.target.
Aug 13 00:52:07.589247 systemd[1]: Reached target swap.target.
Aug 13 00:52:07.589265 systemd[1]: Reached target torcx.target.
Aug 13 00:52:07.589285 systemd[1]: Reached target veritysetup.target.
Aug 13 00:52:07.589304 systemd[1]: Listening on systemd-coredump.socket.
Aug 13 00:52:07.589322 systemd[1]: Listening on systemd-initctl.socket.
Aug 13 00:52:07.589340 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:52:07.589360 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:52:07.589382 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:52:07.589402 systemd[1]: Listening on systemd-userdbd.socket.
Aug 13 00:52:07.589421 systemd[1]: Mounting dev-hugepages.mount...
Aug 13 00:52:07.589439 systemd[1]: Mounting dev-mqueue.mount...
Aug 13 00:52:07.589459 systemd[1]: Mounting media.mount...
Aug 13 00:52:07.589474 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:52:07.589490 systemd[1]: Mounting sys-kernel-debug.mount...
Aug 13 00:52:07.589506 systemd[1]: Mounting sys-kernel-tracing.mount...
Aug 13 00:52:07.589544 systemd[1]: Mounting tmp.mount...
Aug 13 00:52:07.589563 systemd[1]: Starting flatcar-tmpfiles.service...
Aug 13 00:52:07.589581 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:52:07.589598 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:52:07.589614 systemd[1]: Starting modprobe@configfs.service...
Aug 13 00:52:07.589631 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:52:07.589649 systemd[1]: Starting modprobe@drm.service...
Aug 13 00:52:07.589665 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:52:07.589680 systemd[1]: Starting modprobe@fuse.service...
Aug 13 00:52:07.589696 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:52:07.589729 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:52:07.589745 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:52:07.589759 systemd[1]: Stopped systemd-fsck-root.service.
Aug 13 00:52:07.589770 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:52:07.589782 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:52:07.589795 systemd[1]: Stopped systemd-journald.service.
Aug 13 00:52:07.589807 systemd[1]: Starting systemd-journald.service...
Aug 13 00:52:07.589819 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:52:07.589833 systemd[1]: Starting systemd-network-generator.service...
Aug 13 00:52:07.589845 kernel: loop: module loaded
Aug 13 00:52:07.589859 systemd[1]: Starting systemd-remount-fs.service...
Aug 13 00:52:07.589870 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:52:07.589883 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:52:07.589895 systemd[1]: Stopped verity-setup.service.
Aug 13 00:52:07.589906 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:52:07.589918 systemd[1]: Mounted dev-hugepages.mount.
Aug 13 00:52:07.590024 systemd[1]: Mounted dev-mqueue.mount.
Aug 13 00:52:07.590037 systemd[1]: Mounted media.mount.
Aug 13 00:52:07.590051 systemd[1]: Mounted sys-kernel-debug.mount.
Aug 13 00:52:07.590061 systemd[1]: Mounted sys-kernel-tracing.mount.
Aug 13 00:52:07.590071 systemd[1]: Mounted tmp.mount.
Aug 13 00:52:07.590083 systemd[1]: Finished flatcar-tmpfiles.service.
Aug 13 00:52:07.590095 kernel: fuse: init (API version 7.34)
Aug 13 00:52:07.590105 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:52:07.590125 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:52:07.590137 systemd[1]: Finished modprobe@configfs.service.
Aug 13 00:52:07.590149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:52:07.590161 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:52:07.590175 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:52:07.590188 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:52:07.590201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:52:07.590212 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:52:07.590224 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:52:07.590237 systemd[1]: Finished modprobe@fuse.service.
Aug 13 00:52:07.590253 systemd-journald[1145]: Journal started
Aug 13 00:52:07.590306 systemd-journald[1145]: Runtime Journal (/run/log/journal/fe2e0409300a4186a22706b2abeb2669) is 8.0M, max 159.0M, 151.0M free.
Aug 13 00:51:48.648000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:51:49.721000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 00:51:49.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:51:49.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:51:49.749000 audit: BPF prog-id=10 op=LOAD
Aug 13 00:51:49.749000 audit: BPF prog-id=10 op=UNLOAD
Aug 13 00:51:49.760000 audit: BPF prog-id=11 op=LOAD
Aug 13 00:51:49.760000 audit: BPF prog-id=11 op=UNLOAD
Aug 13 00:51:51.510000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Aug 13 00:51:51.510000 audit[1061]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d3a2 a1=c0000ce708 a2=c0000d6c00 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:51.510000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 00:51:51.517000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Aug 13 00:51:51.517000 audit[1061]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d489 a2=1ed a3=0 items=2 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:51.517000 audit: CWD cwd="/"
Aug 13 00:51:51.517000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:51.517000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:51.517000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 00:52:06.966000 audit: BPF prog-id=12 op=LOAD
Aug 13 00:52:06.966000 audit: BPF prog-id=3 op=UNLOAD
Aug 13 00:52:06.971000 audit: BPF prog-id=13 op=LOAD
Aug 13 00:52:06.975000 audit: BPF prog-id=14 op=LOAD
Aug 13 00:52:06.975000 audit: BPF prog-id=4 op=UNLOAD
Aug 13 00:52:06.975000 audit: BPF prog-id=5 op=UNLOAD
Aug 13 00:52:06.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.018000 audit: BPF prog-id=12 op=UNLOAD
Aug 13 00:52:07.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.395000 audit: BPF prog-id=15 op=LOAD
Aug 13 00:52:07.395000 audit: BPF prog-id=16 op=LOAD
Aug 13 00:52:07.395000 audit: BPF prog-id=17 op=LOAD
Aug 13 00:52:07.395000 audit: BPF prog-id=13 op=UNLOAD
Aug 13 00:52:07.395000 audit: BPF prog-id=14 op=UNLOAD
Aug 13 00:52:07.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.585000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Aug 13 00:52:07.585000 audit[1145]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffea6cb6c00 a2=4000 a3=7ffea6cb6c9c items=0 ppid=1 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.585000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Aug 13 00:51:51.372056 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:52:06.965284 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:51:51.404576 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 00:52:06.965297 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Aug 13 00:51:51.404619 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 00:52:06.977352 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:51:51.404660 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Aug 13 00:51:51.404686 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="skipped missing lower profile" missing profile=oem
Aug 13 00:51:51.404739 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Aug 13 00:51:51.404755 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Aug 13 00:51:51.405023 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Aug 13 00:51:51.405071 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 00:51:51.405085 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 00:51:51.455741 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Aug 13 00:51:51.455804 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Aug 13 00:51:51.455832 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Aug 13 00:51:51.455858 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Aug 13 00:51:51.455882 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Aug 13 00:51:51.455898 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:51:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Aug 13 00:52:02.521491 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:52:02Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:52:02.521726 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:52:02Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:52:02.521843 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:52:02Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:52:02.522029 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:52:02Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 00:52:02.522076 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:52:02Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Aug 13 00:52:02.522130 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-08-13T00:52:02Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Aug 13 00:52:07.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.599260 systemd[1]: Started systemd-journald.service.
Aug 13 00:52:07.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.601428 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:52:07.601584 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:52:07.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.603904 systemd[1]: Finished systemd-network-generator.service.
Aug 13 00:52:07.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.606440 systemd[1]: Finished systemd-remount-fs.service.
Aug 13 00:52:07.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.608695 systemd[1]: Reached target network-pre.target.
Aug 13 00:52:07.611743 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Aug 13 00:52:07.615573 systemd[1]: Mounting sys-kernel-config.mount...
Aug 13 00:52:07.617385 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:52:07.703772 systemd[1]: Starting systemd-hwdb-update.service...
Aug 13 00:52:07.707469 systemd[1]: Starting systemd-journal-flush.service...
Aug 13 00:52:07.709522 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:52:07.710645 systemd[1]: Starting systemd-random-seed.service...
Aug 13 00:52:07.712518 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:52:07.713664 systemd[1]: Starting systemd-sysusers.service...
Aug 13 00:52:07.717606 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:52:07.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.720509 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:52:07.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.723297 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Aug 13 00:52:07.725530 systemd[1]: Mounted sys-kernel-config.mount.
Aug 13 00:52:07.728984 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:52:07.732020 systemd[1]: Starting systemd-udev-settle.service...
Aug 13 00:52:07.757650 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:52:07.766232 systemd[1]: Finished systemd-random-seed.service.
Aug 13 00:52:07.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.768605 systemd[1]: Reached target first-boot-complete.target.
Aug 13 00:52:07.775165 systemd-journald[1145]: Time spent on flushing to /var/log/journal/fe2e0409300a4186a22706b2abeb2669 is 25.802ms for 1156 entries.
Aug 13 00:52:07.775165 systemd-journald[1145]: System Journal (/var/log/journal/fe2e0409300a4186a22706b2abeb2669) is 8.0M, max 2.6G, 2.6G free.
Aug 13 00:52:07.948124 systemd-journald[1145]: Received client request to flush runtime journal.
Aug 13 00:52:07.949221 systemd[1]: Finished systemd-journal-flush.service.
Aug 13 00:52:07.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.992462 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:52:08.767462 systemd[1]: Finished systemd-sysusers.service.
Aug 13 00:52:08.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:09.802137 systemd[1]: Finished systemd-hwdb-update.service.
Aug 13 00:52:09.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:09.804000 audit: BPF prog-id=18 op=LOAD
Aug 13 00:52:09.804000 audit: BPF prog-id=19 op=LOAD
Aug 13 00:52:09.804000 audit: BPF prog-id=7 op=UNLOAD
Aug 13 00:52:09.804000 audit: BPF prog-id=8 op=UNLOAD
Aug 13 00:52:09.806081 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:52:09.825083 systemd-udevd[1187]: Using default interface naming scheme 'v252'.
Aug 13 00:52:11.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:11.574000 audit: BPF prog-id=20 op=LOAD
Aug 13 00:52:11.571908 systemd[1]: Started systemd-udevd.service.
Aug 13 00:52:11.578603 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:52:11.612559 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Aug 13 00:52:11.683944 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:52:11.695000 audit[1203]: AVC avc: denied { confidentiality } for pid=1203 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 00:52:11.706976 kernel: hv_vmbus: registering driver hv_balloon
Aug 13 00:52:11.730973 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Aug 13 00:52:11.738841 kernel: hv_utils: Registering HyperV Utility Driver
Aug 13 00:52:11.738915 kernel: hv_vmbus: registering driver hv_utils
Aug 13 00:52:11.695000 audit[1203]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ffe4551ca0 a1=f83c a2=7fb1cc033bc5 a3=5 items=12 ppid=1187 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:11.695000 audit: CWD cwd="/"
Aug 13 00:52:11.695000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=1 name=(null) inode=15184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=2 name=(null) inode=15184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=3 name=(null) inode=15185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=4 name=(null) inode=15184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.748065 kernel: hv_vmbus: registering driver hyperv_fb
Aug 13 00:52:11.748122 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Aug 13 00:52:11.748149 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Aug 13 00:52:11.695000 audit: PATH item=5 name=(null) inode=15186 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=6 name=(null) inode=15184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=7 name=(null) inode=15187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=8 name=(null) inode=15184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=9 name=(null) inode=15188 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=10 name=(null) inode=15184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PATH item=11 name=(null) inode=15189 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:52:11.695000 audit: PROCTITLE proctitle="(udev-worker)"
Aug 13 00:52:11.753703 kernel: hv_utils: Heartbeat IC version 3.0
Aug 13 00:52:11.753757 kernel: hv_utils: Shutdown IC version 3.2
Aug 13 00:52:11.759744 kernel: hv_utils: TimeSync IC version 4.0
Aug 13 00:52:11.531288 kernel: Console: switching to colour dummy device 80x25
Aug 13 00:52:11.611651 systemd-journald[1145]: Time jumped backwards, rotating.
Aug 13 00:52:11.611725 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:52:11.581000 audit: BPF prog-id=21 op=LOAD
Aug 13 00:52:11.581000 audit: BPF prog-id=22 op=LOAD
Aug 13 00:52:11.581000 audit: BPF prog-id=23 op=LOAD
Aug 13 00:52:11.582833 systemd[1]: Starting systemd-userdbd.service...
Aug 13 00:52:11.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:11.645899 systemd[1]: Started systemd-userdbd.service.
Aug 13 00:52:11.800961 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Aug 13 00:52:11.884748 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:52:11.950867 systemd[1]: Finished systemd-udev-settle.service.
Aug 13 00:52:11.955966 kernel: kauditd_printk_skb: 63 callbacks suppressed
Aug 13 00:52:11.956029 kernel: audit: type=1130 audit(1755046331.952:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:11.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:11.954951 systemd[1]: Starting lvm2-activation-early.service...
Aug 13 00:52:12.362310 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:52:12.425361 systemd[1]: Finished lvm2-activation-early.service.
Aug 13 00:52:12.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:12.427739 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:52:12.439065 kernel: audit: type=1130 audit(1755046332.424:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:12.440498 systemd[1]: Starting lvm2-activation.service...
Aug 13 00:52:12.445228 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:52:12.463813 systemd[1]: Finished lvm2-activation.service.
Aug 13 00:52:12.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:12.465727 systemd[1]: Reached target local-fs-pre.target.
Aug 13 00:52:12.476987 kernel: audit: type=1130 audit(1755046332.464:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 00:52:12.476994 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:52:12.477026 systemd[1]: Reached target local-fs.target. Aug 13 00:52:12.480236 systemd[1]: Reached target machines.target. Aug 13 00:52:12.483819 systemd[1]: Starting ldconfig.service... Aug 13 00:52:12.517845 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:12.517935 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:12.519306 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:52:12.523180 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:52:12.527101 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:52:12.530358 systemd[1]: Starting systemd-sysext.service... Aug 13 00:52:12.758019 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1268 (bootctl) Aug 13 00:52:12.759598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:52:12.782469 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:52:12.808824 kernel: audit: type=1130 audit(1755046332.784:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:12.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:12.814769 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 13 00:52:12.815631 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:52:12.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:12.828433 kernel: audit: type=1130 audit(1755046332.816:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:12.831847 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:52:12.845401 systemd-networkd[1202]: lo: Link UP Aug 13 00:52:12.845409 systemd-networkd[1202]: lo: Gained carrier Aug 13 00:52:12.845847 systemd-networkd[1202]: Enumeration completed Aug 13 00:52:12.845986 systemd[1]: Started systemd-networkd.service. Aug 13 00:52:12.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:12.850448 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:52:12.858909 kernel: audit: type=1130 audit(1755046332.847:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:12.879912 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:52:12.896679 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:52:12.896848 systemd[1]: Unmounted usr-share-oem.mount. 
Aug 13 00:52:12.932962 kernel: mlx5_core ea6e:00:02.0 enP60014s1: Link up Aug 13 00:52:12.940966 kernel: loop0: detected capacity change from 0 to 224512 Aug 13 00:52:12.953095 kernel: hv_netvsc 7c1e5204-72ee-7c1e-5204-72ee7c1e5204 eth0: Data path switched to VF: enP60014s1 Aug 13 00:52:12.952042 systemd-networkd[1202]: enP60014s1: Link UP Aug 13 00:52:12.952288 systemd-networkd[1202]: eth0: Link UP Aug 13 00:52:12.952293 systemd-networkd[1202]: eth0: Gained carrier Aug 13 00:52:12.957484 systemd-networkd[1202]: enP60014s1: Gained carrier Aug 13 00:52:12.981059 systemd-networkd[1202]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:52:13.016968 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:52:13.033962 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 00:52:13.048578 (sd-sysext)[1281]: Using extensions 'kubernetes'. Aug 13 00:52:13.049018 (sd-sysext)[1281]: Merged extensions into '/usr'. Aug 13 00:52:13.064637 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:13.066105 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:52:13.068363 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.070150 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:52:13.073437 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:52:13.076534 systemd[1]: Starting modprobe@loop.service... Aug 13 00:52:13.078324 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.078481 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:13.078616 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 00:52:13.081220 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:52:13.083332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:52:13.083483 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:52:13.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.086029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:52:13.086137 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:52:13.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.096960 kernel: audit: type=1130 audit(1755046333.084:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.097010 kernel: audit: type=1131 audit(1755046333.084:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.109074 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:52:13.109200 systemd[1]: Finished modprobe@loop.service. Aug 13 00:52:13.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.111575 systemd[1]: Finished systemd-sysext.service. 
Aug 13 00:52:13.135057 kernel: audit: type=1130 audit(1755046333.106:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.136621 kernel: audit: type=1131 audit(1755046333.106:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.135078 systemd[1]: Starting ensure-sysext.service... Aug 13 00:52:13.136030 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:52:13.136129 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.137742 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:52:13.146379 systemd[1]: Reloading. 
Aug 13 00:52:13.215504 /usr/lib/systemd/system-generators/torcx-generator[1308]: time="2025-08-13T00:52:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:52:13.215548 /usr/lib/systemd/system-generators/torcx-generator[1308]: time="2025-08-13T00:52:13Z" level=info msg="torcx already run" Aug 13 00:52:13.308645 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:52:13.308666 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:52:13.324997 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:52:13.390000 audit: BPF prog-id=24 op=LOAD Aug 13 00:52:13.390000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:52:13.390000 audit: BPF prog-id=25 op=LOAD Aug 13 00:52:13.390000 audit: BPF prog-id=26 op=LOAD Aug 13 00:52:13.390000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:52:13.390000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:52:13.391000 audit: BPF prog-id=27 op=LOAD Aug 13 00:52:13.391000 audit: BPF prog-id=28 op=LOAD Aug 13 00:52:13.391000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:52:13.391000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:52:13.391000 audit: BPF prog-id=29 op=LOAD Aug 13 00:52:13.391000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:52:13.392000 audit: BPF prog-id=30 op=LOAD Aug 13 00:52:13.392000 audit: BPF prog-id=31 op=LOAD Aug 13 00:52:13.392000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:52:13.392000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:52:13.394000 audit: BPF prog-id=32 op=LOAD Aug 13 00:52:13.394000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:52:13.407362 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:13.407630 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.409262 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:52:13.411822 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:52:13.414678 systemd[1]: Starting modprobe@loop.service... Aug 13 00:52:13.415580 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.415786 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:13.416015 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 00:52:13.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.417679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:52:13.417894 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:52:13.421838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:52:13.421997 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:52:13.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.423433 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:52:13.423542 systemd[1]: Finished modprobe@loop.service. Aug 13 00:52:13.428432 systemd[1]: Finished ensure-sysext.service. 
Aug 13 00:52:13.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.430301 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:13.430589 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.431599 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:52:13.433791 systemd[1]: Starting modprobe@drm.service... Aug 13 00:52:13.435917 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:52:13.439411 systemd[1]: Starting modprobe@loop.service... Aug 13 00:52:13.440542 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.440613 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:13.440761 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:52:13.441405 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:52:13.441590 systemd[1]: Finished modprobe@drm.service. Aug 13 00:52:13.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:52:13.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:13.442798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:52:13.442932 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:52:13.443323 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:52:13.446366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:52:13.446513 systemd[1]: Finished modprobe@dm_mod.service. 
Aug 13 00:52:13.447844 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:52:13.447961 systemd[1]: Finished modprobe@loop.service. Aug 13 00:52:13.448524 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:52:13.623548 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:52:13.915815 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:52:14.064359 systemd-fsck[1275]: fsck.fat 4.2 (2021-01-31) Aug 13 00:52:14.064359 systemd-fsck[1275]: /dev/sda1: 789 files, 119324/258078 clusters Aug 13 00:52:14.066584 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:52:14.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:14.071811 systemd[1]: Mounting boot.mount... Aug 13 00:52:14.087733 systemd[1]: Mounted boot.mount. Aug 13 00:52:14.103911 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:52:14.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:14.243621 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:52:14.562099 systemd-networkd[1202]: eth0: Gained IPv6LL Aug 13 00:52:14.567987 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:52:14.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:52:17.824486 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:52:17.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:17.828408 systemd[1]: Starting audit-rules.service... Aug 13 00:52:17.830050 kernel: kauditd_printk_skb: 39 callbacks suppressed Aug 13 00:52:17.830116 kernel: audit: type=1130 audit(1755046337.826:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:17.845073 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:52:17.848630 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:52:17.854123 systemd[1]: Starting systemd-resolved.service... Aug 13 00:52:17.864681 kernel: audit: type=1334 audit(1755046337.851:195): prog-id=33 op=LOAD Aug 13 00:52:17.864756 kernel: audit: type=1334 audit(1755046337.859:196): prog-id=34 op=LOAD Aug 13 00:52:17.851000 audit: BPF prog-id=33 op=LOAD Aug 13 00:52:17.859000 audit: BPF prog-id=34 op=LOAD Aug 13 00:52:17.865145 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:52:17.868434 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:52:17.889000 audit[1388]: SYSTEM_BOOT pid=1388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:52:17.905193 kernel: audit: type=1127 audit(1755046337.889:197): pid=1388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Aug 13 00:52:17.906489 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:52:17.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:17.923966 kernel: audit: type=1130 audit(1755046337.907:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:17.929607 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:52:17.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:17.932058 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:52:17.944957 kernel: audit: type=1130 audit(1755046337.931:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:18.055789 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:52:18.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:18.058291 systemd[1]: Reached target time-set.target. Aug 13 00:52:18.070033 kernel: audit: type=1130 audit(1755046338.057:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:52:18.130607 systemd-resolved[1385]: Positive Trust Anchors: Aug 13 00:52:18.130630 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:52:18.130683 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:52:18.246899 systemd-timesyncd[1387]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Aug 13 00:52:18.247434 systemd-timesyncd[1387]: Initial clock synchronization to Wed 2025-08-13 00:52:18.247458 UTC. Aug 13 00:52:18.337198 systemd-resolved[1385]: Using system hostname 'ci-3510.3.8-a-4e9ab5f8c8'. Aug 13 00:52:18.339101 systemd[1]: Started systemd-resolved.service. Aug 13 00:52:18.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:18.341650 systemd[1]: Reached target network.target. Aug 13 00:52:18.354035 kernel: audit: type=1130 audit(1755046338.340:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:18.355102 systemd[1]: Reached target network-online.target. Aug 13 00:52:18.357376 systemd[1]: Reached target nss-lookup.target. Aug 13 00:52:18.359610 systemd[1]: Finished systemd-journal-catalog-update.service. 
Aug 13 00:52:18.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:18.375989 kernel: audit: type=1130 audit(1755046338.361:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:18.437000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:52:18.438995 systemd[1]: Finished audit-rules.service. Aug 13 00:52:18.440265 augenrules[1403]: No rules Aug 13 00:52:18.448963 kernel: audit: type=1305 audit(1755046338.437:203): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:52:18.437000 audit[1403]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8a74b3f0 a2=420 a3=0 items=0 ppid=1382 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:18.437000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:52:24.523612 ldconfig[1267]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:52:24.534467 systemd[1]: Finished ldconfig.service. Aug 13 00:52:24.537805 systemd[1]: Starting systemd-update-done.service... Aug 13 00:52:24.562998 systemd[1]: Finished systemd-update-done.service. Aug 13 00:52:24.565176 systemd[1]: Reached target sysinit.target. Aug 13 00:52:24.567076 systemd[1]: Started motdgen.path. 
Aug 13 00:52:24.568613 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:52:24.571261 systemd[1]: Started logrotate.timer. Aug 13 00:52:24.572874 systemd[1]: Started mdadm.timer. Aug 13 00:52:24.574472 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:52:24.576537 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:52:24.576576 systemd[1]: Reached target paths.target. Aug 13 00:52:24.578249 systemd[1]: Reached target timers.target. Aug 13 00:52:24.580984 systemd[1]: Listening on dbus.socket. Aug 13 00:52:24.583769 systemd[1]: Starting docker.socket... Aug 13 00:52:24.631518 systemd[1]: Listening on sshd.socket. Aug 13 00:52:24.633626 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:24.634244 systemd[1]: Listening on docker.socket. Aug 13 00:52:24.636250 systemd[1]: Reached target sockets.target. Aug 13 00:52:24.638336 systemd[1]: Reached target basic.target. Aug 13 00:52:24.640240 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:52:24.640277 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:52:24.641368 systemd[1]: Starting containerd.service... Aug 13 00:52:24.644289 systemd[1]: Starting dbus.service... Aug 13 00:52:24.646813 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:52:24.649821 systemd[1]: Starting extend-filesystems.service... Aug 13 00:52:24.651726 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:52:24.668211 systemd[1]: Starting kubelet.service... 
Aug 13 00:52:24.670897 systemd[1]: Starting motdgen.service... Aug 13 00:52:24.673530 systemd[1]: Started nvidia.service. Aug 13 00:52:24.676655 systemd[1]: Starting prepare-helm.service... Aug 13 00:52:24.679360 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:52:24.682207 systemd[1]: Starting sshd-keygen.service... Aug 13 00:52:24.687305 systemd[1]: Starting systemd-logind.service... Aug 13 00:52:24.690384 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:52:24.690492 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:52:24.691063 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:52:24.691926 systemd[1]: Starting update-engine.service... Aug 13 00:52:24.694840 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:52:24.708756 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:52:24.709011 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:52:24.757138 jq[1413]: false Aug 13 00:52:24.757413 jq[1425]: true Aug 13 00:52:24.759643 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:52:24.759907 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:52:24.768091 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:52:24.768275 systemd[1]: Finished motdgen.service. 
Aug 13 00:52:24.777493 extend-filesystems[1414]: Found loop1 Aug 13 00:52:24.777493 extend-filesystems[1414]: Found sda Aug 13 00:52:24.781293 extend-filesystems[1414]: Found sda1 Aug 13 00:52:24.781293 extend-filesystems[1414]: Found sda2 Aug 13 00:52:24.781293 extend-filesystems[1414]: Found sda3 Aug 13 00:52:24.781293 extend-filesystems[1414]: Found usr Aug 13 00:52:24.781293 extend-filesystems[1414]: Found sda4 Aug 13 00:52:24.781293 extend-filesystems[1414]: Found sda6 Aug 13 00:52:24.781293 extend-filesystems[1414]: Found sda7 Aug 13 00:52:24.781293 extend-filesystems[1414]: Found sda9 Aug 13 00:52:24.781293 extend-filesystems[1414]: Checking size of /dev/sda9 Aug 13 00:52:24.802018 jq[1441]: true Aug 13 00:52:24.850284 env[1437]: time="2025-08-13T00:52:24.850232075Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:52:24.867602 tar[1429]: linux-amd64/LICENSE Aug 13 00:52:24.867899 tar[1429]: linux-amd64/helm Aug 13 00:52:24.902888 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:52:24.906332 systemd-logind[1423]: New seat seat0. Aug 13 00:52:24.934784 extend-filesystems[1414]: Old size kept for /dev/sda9 Aug 13 00:52:24.937286 extend-filesystems[1414]: Found sr0 Aug 13 00:52:24.937291 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:52:24.937472 systemd[1]: Finished extend-filesystems.service. Aug 13 00:52:24.962697 env[1437]: time="2025-08-13T00:52:24.962657474Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:52:24.962839 env[1437]: time="2025-08-13T00:52:24.962816179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:24.967970 env[1437]: time="2025-08-13T00:52:24.967921938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:52:24.967970 env[1437]: time="2025-08-13T00:52:24.967969639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:24.969492 env[1437]: time="2025-08-13T00:52:24.969458986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:52:24.969565 env[1437]: time="2025-08-13T00:52:24.969493187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:24.969565 env[1437]: time="2025-08-13T00:52:24.969510687Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:52:24.969565 env[1437]: time="2025-08-13T00:52:24.969523588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:24.969682 env[1437]: time="2025-08-13T00:52:24.969621491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:24.969911 env[1437]: time="2025-08-13T00:52:24.969885599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:52:25.004172 env[1437]: time="2025-08-13T00:52:24.970155407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:52:25.004172 env[1437]: time="2025-08-13T00:52:24.975624778Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:52:25.009325 env[1437]: time="2025-08-13T00:52:25.009280909Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:52:25.009429 env[1437]: time="2025-08-13T00:52:25.009352911Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:52:25.020951 bash[1461]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:52:25.021316 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033109604Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033154905Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033174606Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033220107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033242008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033262609Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
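containerd skips the btrfs and zfs snapshotters above because `/var/lib/containerd` sits on ext4. The eligibility question — "what filesystem backs this path?" — can be answered by a longest mount-point prefix match over `/proc/mounts`. A simplified sketch (helper name and exact matching rules are assumptions, not containerd's actual implementation):

```python
def fs_type_for(path: str, mounts_text: str) -> str:
    """Return the filesystem type backing `path` via longest
    mount-point prefix match. `mounts_text` is /proc/mounts content:
    '<device> <mountpoint> <fstype> <options> 0 0' per line."""
    best, best_type = "", "unknown"
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        mnt, fstype = fields[1], fields[2]
        # Match the mount point itself or any path beneath it.
        if (path == mnt or path.startswith(mnt.rstrip("/") + "/")) \
                and len(mnt) > len(best):
            best, best_type = mnt, fstype
    return best_type
```

With only an ext4 root mounted, the btrfs/zfs snapshotter paths resolve to ext4 and the plugins are skipped, exactly as logged.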
type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033283509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033302310Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033320010Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033339211Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033359011Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033377212Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:52:25.033597 env[1437]: time="2025-08-13T00:52:25.033499715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034458943Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034794553Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034829854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034850255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034907056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034925157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034957158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034974058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.034990459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.035008659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.035024860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.035040460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.035059461Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.035204965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036122 env[1437]: time="2025-08-13T00:52:25.035223766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Aug 13 00:52:25.036623 env[1437]: time="2025-08-13T00:52:25.035240566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:52:25.036623 env[1437]: time="2025-08-13T00:52:25.035255067Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:52:25.036623 env[1437]: time="2025-08-13T00:52:25.035274567Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:52:25.036623 env[1437]: time="2025-08-13T00:52:25.035288168Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:52:25.036623 env[1437]: time="2025-08-13T00:52:25.035312968Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:52:25.036623 env[1437]: time="2025-08-13T00:52:25.035361070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:52:25.036850 env[1437]: time="2025-08-13T00:52:25.035608077Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:52:25.036850 env[1437]: time="2025-08-13T00:52:25.035682879Z" level=info msg="Connect containerd service" Aug 13 00:52:25.036850 env[1437]: time="2025-08-13T00:52:25.035723980Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.037521133Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.038460860Z" level=info msg="Start subscribing containerd event" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.038520462Z" level=info msg="Start recovering state" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.038599264Z" level=info msg="Start event monitor" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.038611965Z" level=info msg="Start snapshots syncer" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.038625365Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.038635965Z" level=info msg="Start streaming server" Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.037807841Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.038866172Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:52:25.072163 env[1437]: time="2025-08-13T00:52:25.040837330Z" level=info msg="containerd successfully booted in 0.191877s" Aug 13 00:52:25.039006 systemd[1]: Started containerd.service. 
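The containerd entries above are logfmt-style lines: space-separated `key=value` pairs where values containing spaces are double-quoted. A small parser for pulling out `time`, `level`, and `msg` (a sketch; real logfmt has more edge cases than this regex handles):

```python
import re

# key=value or key="quoted value" pairs; the quote branch is tried first.
_PAIR = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_logfmt(line: str) -> dict:
    """Parse one containerd log line into a key -> value dict."""
    return {k: q or v for k, q, v in _PAIR.findall(line)}

rec = parse_logfmt(
    'time="2025-08-13T00:52:25.040837330Z" level=info '
    'msg="containerd successfully booted in 0.191877s"'
)
# rec["msg"] -> 'containerd successfully booted in 0.191877s'
```

This makes it easy to filter the boot transcript, e.g. keeping only `level=error` entries such as the failed otlp tracing processor.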
Aug 13 00:52:25.141675 systemd[1]: nvidia.service: Deactivated successfully. Aug 13 00:52:25.550209 dbus-daemon[1412]: [system] SELinux support is enabled Aug 13 00:52:25.550429 systemd[1]: Started dbus.service. Aug 13 00:52:25.554848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:52:25.554886 systemd[1]: Reached target system-config.target. Aug 13 00:52:25.557049 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:52:25.557076 systemd[1]: Reached target user-config.target. Aug 13 00:52:25.560630 systemd[1]: Started systemd-logind.service. Aug 13 00:52:25.560773 dbus-daemon[1412]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:52:25.855768 update_engine[1424]: I0813 00:52:25.839807 1424 main.cc:92] Flatcar Update Engine starting Aug 13 00:52:25.912421 systemd[1]: Started update-engine.service. Aug 13 00:52:25.914441 update_engine[1424]: I0813 00:52:25.913671 1424 update_check_scheduler.cc:74] Next update check in 5m39s Aug 13 00:52:25.917118 systemd[1]: Started locksmithd.service. Aug 13 00:52:25.923558 tar[1429]: linux-amd64/README.md Aug 13 00:52:25.929000 systemd[1]: Finished prepare-helm.service. Aug 13 00:52:26.430507 systemd[1]: Started kubelet.service. Aug 13 00:52:26.591150 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:52:26.618997 systemd[1]: Finished sshd-keygen.service. Aug 13 00:52:26.622820 systemd[1]: Starting issuegen.service... Aug 13 00:52:26.626396 systemd[1]: Started waagent.service. Aug 13 00:52:26.634588 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:52:26.634771 systemd[1]: Finished issuegen.service. Aug 13 00:52:26.637934 systemd[1]: Starting systemd-user-sessions.service... 
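update_engine's "Next update check in 5m39s" (and containerd's `StreamIdleTimeout:4h0m0s` earlier) use Go's compact duration notation. Converting one to seconds is a small exercise (helper name is illustrative; fractional components are not handled here):

```python
import re

def parse_duration(s: str) -> int:
    """Convert a compact Go-style duration like '5m39s' or '4h0m0s'
    to whole seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(value) * units[unit]
               for value, unit in re.findall(r"(\d+)([hms])", s))

parse_duration("5m39s")  # -> 339
```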
Aug 13 00:52:26.663063 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:52:26.667184 systemd[1]: Started getty@tty1.service. Aug 13 00:52:26.670277 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:52:26.672549 systemd[1]: Reached target getty.target. Aug 13 00:52:26.674505 systemd[1]: Reached target multi-user.target. Aug 13 00:52:26.677809 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:52:26.688750 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:52:26.688891 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:52:26.691277 systemd[1]: Startup finished in 1.065s (firmware) + 31.952s (loader) + 858ms (kernel) + 15.335s (initrd) + 38.884s (userspace) = 1min 28.096s. Aug 13 00:52:27.077166 kubelet[1515]: E0813 00:52:27.077116 1515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:52:27.078881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:52:27.079061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:52:27.079370 systemd[1]: kubelet.service: Consumed 1.140s CPU time. Aug 13 00:52:27.468379 login[1539]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Aug 13 00:52:27.500781 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:52:27.566238 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:52:27.660894 systemd[1]: Created slice user-500.slice. Aug 13 00:52:27.662836 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:52:27.665434 systemd-logind[1423]: New session 2 of user core. 
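The "Startup finished in 1.065s (firmware) + 31.952s (loader) + 858ms (kernel) + 15.335s (initrd) + 38.884s (userspace) = 1min 28.096s" line above breaks boot time into phases. A sketch for extracting the per-phase durations in seconds (note the rounded phase values sum to 88.094s while systemd prints 88.096s from its raw timestamps, so a 2 ms discrepancy is expected):

```python
import re

def startup_phases(line: str) -> dict:
    """Extract per-phase durations (in seconds) from a systemd
    'Startup finished in ...' line."""
    phases = {}
    # e.g. '858ms (kernel)' or '38.884s (userspace)'
    for value, unit, name in re.findall(r"([\d.]+)(ms|s)\s+\((\w+)\)", line):
        phases[name] = float(value) / (1000 if unit == "ms" else 1)
    return phases
```

On this boot, userspace dominates (38.9s), with the loader close behind (32.0s) — consistent with the kubelet and waagent activity that follows.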
Aug 13 00:52:27.701581 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:52:27.703349 systemd[1]: Starting user@500.service... Aug 13 00:52:27.740793 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:52:28.170603 systemd[1542]: Queued start job for default target default.target. Aug 13 00:52:28.171373 systemd[1542]: Reached target paths.target. Aug 13 00:52:28.171410 systemd[1542]: Reached target sockets.target. Aug 13 00:52:28.171433 systemd[1542]: Reached target timers.target. Aug 13 00:52:28.171453 systemd[1542]: Reached target basic.target. Aug 13 00:52:28.171600 systemd[1]: Started user@500.service. Aug 13 00:52:28.173109 systemd[1]: Started session-2.scope. Aug 13 00:52:28.173784 systemd[1542]: Reached target default.target. Aug 13 00:52:28.174046 systemd[1542]: Startup finished in 425ms. Aug 13 00:52:28.470567 login[1539]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:52:28.476726 systemd[1]: Started session-1.scope. Aug 13 00:52:28.477248 systemd-logind[1423]: New session 1 of user core. Aug 13 00:52:37.286306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:52:37.286634 systemd[1]: Stopped kubelet.service. Aug 13 00:52:37.286697 systemd[1]: kubelet.service: Consumed 1.140s CPU time. Aug 13 00:52:37.288736 systemd[1]: Starting kubelet.service... 
Aug 13 00:52:37.820152 waagent[1533]: 2025-08-13T00:52:37.820038Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Aug 13 00:52:37.916336 waagent[1533]: 2025-08-13T00:52:37.914075Z INFO Daemon Daemon OS: flatcar 3510.3.8 Aug 13 00:52:37.917720 waagent[1533]: 2025-08-13T00:52:37.917636Z INFO Daemon Daemon Python: 3.9.16 Aug 13 00:52:37.920843 waagent[1533]: 2025-08-13T00:52:37.920752Z INFO Daemon Daemon Run daemon Aug 13 00:52:37.928997 waagent[1533]: 2025-08-13T00:52:37.921400Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Aug 13 00:52:37.970160 waagent[1533]: 2025-08-13T00:52:37.970018Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Aug 13 00:52:37.976813 waagent[1533]: 2025-08-13T00:52:37.976694Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.977145Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.977916Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.978955Z INFO Daemon Daemon Activate resource disk Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.979510Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.987429Z INFO Daemon Daemon Found device: None Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.988178Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.989126Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
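The repeated "Unable to get cloud-init enabled status" messages above come from waagent probing `systemctl is-enabled cloud-init-local.service` (and then a `service` binary that does not exist on Flatcar), treating every failure as "cloud-init is enabled: False". A simplified sketch of that decision, split into a testable pure function (names are illustrative, not waagent's actual code):

```python
import subprocess

def interpret_is_enabled(returncode: int, stdout: str) -> bool:
    """Map a `systemctl is-enabled` result to a boolean: only a clean
    zero exit reporting 'enabled' counts; exit 1 (disabled or missing
    unit, as in the log above) means not enabled."""
    return returncode == 0 and stdout.strip() == "enabled"

def cloud_init_enabled() -> bool:
    """Probe the unit the way the log shows; a missing systemctl
    binary also reads as 'not enabled'."""
    try:
        res = subprocess.run(
            ["systemctl", "is-enabled", "cloud-init-local.service"],
            capture_output=True, text=True)
    except FileNotFoundError:
        return False
    return interpret_is_enabled(res.returncode, res.stdout)
```

Since Flatcar ships no cloud-init units, the probe fails and waagent falls through to "Using waagent for provisioning", as logged.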
duration=0 Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.990720Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:52:37.998274 waagent[1533]: 2025-08-13T00:52:37.991415Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:52:38.004868 waagent[1533]: 2025-08-13T00:52:38.004746Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Aug 13 00:52:38.013689 waagent[1533]: 2025-08-13T00:52:38.013001Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:52:38.016907 systemd[1]: Started kubelet.service. Aug 13 00:52:38.017513 waagent[1533]: 2025-08-13T00:52:38.017314Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:52:38.019917 waagent[1533]: 2025-08-13T00:52:38.019836Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:52:38.081659 kubelet[1572]: E0813 00:52:38.081577 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:52:38.084575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:52:38.084739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:52:38.190965 waagent[1533]: 2025-08-13T00:52:38.189518Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:52:38.281841 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Aug 13 00:52:38.319611 waagent[1533]: 2025-08-13T00:52:38.319440Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:52:38.332572 waagent[1533]: 2025-08-13T00:52:38.320036Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:52:38.332572 waagent[1533]: 2025-08-13T00:52:38.321015Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Aug 13 00:52:38.332572 waagent[1533]: 2025-08-13T00:52:38.321761Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:52:38.332572 waagent[1533]: 2025-08-13T00:52:38.323084Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:52:38.332572 waagent[1533]: 2025-08-13T00:52:38.323705Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:52:38.489451 waagent[1533]: 2025-08-13T00:52:38.489373Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:52:38.496451 waagent[1533]: 2025-08-13T00:52:38.490374Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:52:38.496451 waagent[1533]: 2025-08-13T00:52:38.491013Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:52:38.944683 waagent[1533]: 2025-08-13T00:52:38.944533Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:52:38.956547 waagent[1533]: 2025-08-13T00:52:38.956452Z INFO Daemon Daemon Forcing an update of the goal state.. 
Aug 13 00:52:38.961098 waagent[1533]: 2025-08-13T00:52:38.956799Z INFO Daemon Daemon Fetching goal state [incarnation 1] Aug 13 00:52:39.035541 waagent[1533]: 2025-08-13T00:52:39.035410Z INFO Daemon Daemon Found private key matching thumbprint CAD99EDD65E805AC8A0D5897A221EFD14223DC18 Aug 13 00:52:39.040806 waagent[1533]: 2025-08-13T00:52:39.035976Z INFO Daemon Daemon Fetch goal state completed Aug 13 00:52:39.059392 waagent[1533]: 2025-08-13T00:52:39.059329Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: cbb9ef40-b57b-48b4-b9a5-1f042390f7fa New eTag: 12693351333131019941] Aug 13 00:52:39.066993 waagent[1533]: 2025-08-13T00:52:39.060026Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:52:39.072363 waagent[1533]: 2025-08-13T00:52:39.072305Z INFO Daemon Daemon Starting provisioning Aug 13 00:52:39.079128 waagent[1533]: 2025-08-13T00:52:39.072737Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:52:39.079128 waagent[1533]: 2025-08-13T00:52:39.073671Z INFO Daemon Daemon Set hostname [ci-3510.3.8-a-4e9ab5f8c8] Aug 13 00:52:39.093600 waagent[1533]: 2025-08-13T00:52:39.093480Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-a-4e9ab5f8c8] Aug 13 00:52:39.101044 waagent[1533]: 2025-08-13T00:52:39.095087Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:52:39.101044 waagent[1533]: 2025-08-13T00:52:39.096358Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:52:39.110365 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Aug 13 00:52:39.110619 systemd[1]: Stopped systemd-networkd-wait-online.service. Aug 13 00:52:39.110693 systemd[1]: Stopping systemd-networkd-wait-online.service... Aug 13 00:52:39.111065 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:52:39.116982 systemd-networkd[1202]: eth0: DHCPv6 lease lost Aug 13 00:52:39.118297 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Aug 13 00:52:39.118494 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:52:39.120816 systemd[1]: Starting systemd-networkd.service... Aug 13 00:52:39.152993 systemd-networkd[1594]: enP60014s1: Link UP Aug 13 00:52:39.153003 systemd-networkd[1594]: enP60014s1: Gained carrier Aug 13 00:52:39.154324 systemd-networkd[1594]: eth0: Link UP Aug 13 00:52:39.154334 systemd-networkd[1594]: eth0: Gained carrier Aug 13 00:52:39.154767 systemd-networkd[1594]: lo: Link UP Aug 13 00:52:39.154777 systemd-networkd[1594]: lo: Gained carrier Aug 13 00:52:39.155211 systemd-networkd[1594]: eth0: Gained IPv6LL Aug 13 00:52:39.155492 systemd-networkd[1594]: Enumeration completed Aug 13 00:52:39.155591 systemd[1]: Started systemd-networkd.service. Aug 13 00:52:39.158003 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:52:39.160927 waagent[1533]: 2025-08-13T00:52:39.160745Z INFO Daemon Daemon Create user account if not exists Aug 13 00:52:39.163923 waagent[1533]: 2025-08-13T00:52:39.163824Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:52:39.165646 systemd-networkd[1594]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:52:39.166693 waagent[1533]: 2025-08-13T00:52:39.166623Z INFO Daemon Daemon Configure sudoer Aug 13 00:52:39.186210 waagent[1533]: 2025-08-13T00:52:39.186139Z INFO Daemon Daemon Configure sshd Aug 13 00:52:39.188384 waagent[1533]: 2025-08-13T00:52:39.188317Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:52:39.202001 systemd-networkd[1594]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:52:39.205080 systemd[1]: Finished systemd-networkd-wait-online.service. 
Aug 13 00:52:40.340460 waagent[1533]: 2025-08-13T00:52:40.340368Z INFO Daemon Daemon Provisioning complete Aug 13 00:52:40.353901 waagent[1533]: 2025-08-13T00:52:40.353812Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:52:40.360368 waagent[1533]: 2025-08-13T00:52:40.354325Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 00:52:40.360368 waagent[1533]: 2025-08-13T00:52:40.356112Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Aug 13 00:52:40.622581 waagent[1600]: 2025-08-13T00:52:40.622412Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Aug 13 00:52:40.623313 waagent[1600]: 2025-08-13T00:52:40.623245Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:40.623457 waagent[1600]: 2025-08-13T00:52:40.623400Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:40.634087 waagent[1600]: 2025-08-13T00:52:40.634017Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Aug 13 00:52:40.634248 waagent[1600]: 2025-08-13T00:52:40.634195Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Aug 13 00:52:40.684494 waagent[1600]: 2025-08-13T00:52:40.684377Z INFO ExtHandler ExtHandler Found private key matching thumbprint CAD99EDD65E805AC8A0D5897A221EFD14223DC18 Aug 13 00:52:40.684787 waagent[1600]: 2025-08-13T00:52:40.684728Z INFO ExtHandler ExtHandler Fetch goal state completed Aug 13 00:52:40.698198 waagent[1600]: 2025-08-13T00:52:40.698139Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: e9bb10c5-66c5-4796-ad1e-63d3b0f33c20 New eTag: 12693351333131019941] Aug 13 00:52:40.698694 waagent[1600]: 2025-08-13T00:52:40.698637Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:52:40.832022 waagent[1600]: 2025-08-13T00:52:40.831849Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:52:40.856827 waagent[1600]: 2025-08-13T00:52:40.856740Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1600 Aug 13 00:52:40.860280 waagent[1600]: 2025-08-13T00:52:40.860208Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:52:40.861444 waagent[1600]: 2025-08-13T00:52:40.861381Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:52:40.994992 waagent[1600]: 2025-08-13T00:52:40.994841Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:52:40.995443 waagent[1600]: 2025-08-13T00:52:40.995365Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:52:41.004156 waagent[1600]: 2025-08-13T00:52:41.004101Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Aug 13 00:52:41.004629 waagent[1600]: 2025-08-13T00:52:41.004566Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:52:41.005702 waagent[1600]: 2025-08-13T00:52:41.005635Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Aug 13 00:52:41.007018 waagent[1600]: 2025-08-13T00:52:41.006957Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:52:41.007417 waagent[1600]: 2025-08-13T00:52:41.007362Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:41.007572 waagent[1600]: 2025-08-13T00:52:41.007522Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:41.008107 waagent[1600]: 2025-08-13T00:52:41.008048Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 00:52:41.008512 waagent[1600]: 2025-08-13T00:52:41.008457Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:52:41.009299 waagent[1600]: 2025-08-13T00:52:41.009239Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:52:41.009512 waagent[1600]: 2025-08-13T00:52:41.009446Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Aug 13 00:52:41.009638 waagent[1600]: 2025-08-13T00:52:41.009581Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:52:41.009638 waagent[1600]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:52:41.009638 waagent[1600]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:52:41.009638 waagent[1600]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:52:41.009638 waagent[1600]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:41.009638 waagent[1600]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:41.009638 waagent[1600]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:41.010200 waagent[1600]: 2025-08-13T00:52:41.010145Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:41.013474 waagent[1600]: 2025-08-13T00:52:41.013315Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:41.014620 waagent[1600]: 2025-08-13T00:52:41.014558Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:52:41.015033 waagent[1600]: 2025-08-13T00:52:41.014972Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:52:41.015110 waagent[1600]: 2025-08-13T00:52:41.015058Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:52:41.015328 waagent[1600]: 2025-08-13T00:52:41.015271Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Aug 13 00:52:41.015572 waagent[1600]: 2025-08-13T00:52:41.015522Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:52:41.016218 waagent[1600]: 2025-08-13T00:52:41.016165Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:52:41.028283 waagent[1600]: 2025-08-13T00:52:41.028223Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Aug 13 00:52:41.029716 waagent[1600]: 2025-08-13T00:52:41.029669Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:52:41.030650 waagent[1600]: 2025-08-13T00:52:41.030593Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Aug 13 00:52:41.048370 waagent[1600]: 2025-08-13T00:52:41.048313Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Aug 13 00:52:41.088956 waagent[1600]: 2025-08-13T00:52:41.088827Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1594' Aug 13 00:52:41.205593 waagent[1600]: 2025-08-13T00:52:41.205460Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:52:41.205593 waagent[1600]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:52:41.205593 waagent[1600]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:52:41.205593 waagent[1600]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:72:ee brd ff:ff:ff:ff:ff:ff Aug 13 00:52:41.205593 waagent[1600]: 3: enP60014s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:72:ee brd ff:ff:ff:ff:ff:ff\ altname enP60014p0s2 Aug 13 00:52:41.205593 waagent[1600]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:52:41.205593 waagent[1600]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever 
preferred_lft forever Aug 13 00:52:41.205593 waagent[1600]: 2: eth0 inet 10.200.4.32/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:52:41.205593 waagent[1600]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:52:41.205593 waagent[1600]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:52:41.205593 waagent[1600]: 2: eth0 inet6 fe80::7e1e:52ff:fe04:72ee/64 scope link \ valid_lft forever preferred_lft forever Aug 13 00:52:41.418003 waagent[1600]: 2025-08-13T00:52:41.417918Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Aug 13 00:52:42.361704 waagent[1533]: 2025-08-13T00:52:42.361409Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Aug 13 00:52:42.367459 waagent[1533]: 2025-08-13T00:52:42.367379Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Aug 13 00:52:43.523154 waagent[1628]: 2025-08-13T00:52:43.523052Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Aug 13 00:52:43.523868 waagent[1628]: 2025-08-13T00:52:43.523798Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Aug 13 00:52:43.524048 waagent[1628]: 2025-08-13T00:52:43.523996Z INFO ExtHandler ExtHandler Python: 3.9.16 Aug 13 00:52:43.524209 waagent[1628]: 2025-08-13T00:52:43.524160Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Aug 13 00:52:43.539284 waagent[1628]: 2025-08-13T00:52:43.539189Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:52:43.539692 waagent[1628]: 2025-08-13T00:52:43.539635Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:43.539869 
waagent[1628]: 2025-08-13T00:52:43.539820Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:43.540123 waagent[1628]: 2025-08-13T00:52:43.540071Z INFO ExtHandler ExtHandler Initializing the goal state... Aug 13 00:52:43.551557 waagent[1628]: 2025-08-13T00:52:43.551483Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:52:43.559259 waagent[1628]: 2025-08-13T00:52:43.559200Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 00:52:43.560159 waagent[1628]: 2025-08-13T00:52:43.560100Z INFO ExtHandler Aug 13 00:52:43.560323 waagent[1628]: 2025-08-13T00:52:43.560272Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b2d63b97-0410-4902-90e4-5ea858bb6024 eTag: 12693351333131019941 source: Fabric] Aug 13 00:52:43.561037 waagent[1628]: 2025-08-13T00:52:43.560979Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Aug 13 00:52:43.562157 waagent[1628]: 2025-08-13T00:52:43.562095Z INFO ExtHandler Aug 13 00:52:43.562307 waagent[1628]: 2025-08-13T00:52:43.562257Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:52:43.569099 waagent[1628]: 2025-08-13T00:52:43.569045Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 00:52:43.569556 waagent[1628]: 2025-08-13T00:52:43.569504Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:52:43.600926 waagent[1628]: 2025-08-13T00:52:43.600864Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Aug 13 00:52:43.655633 waagent[1628]: 2025-08-13T00:52:43.655520Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CAD99EDD65E805AC8A0D5897A221EFD14223DC18', 'hasPrivateKey': True} Aug 13 00:52:43.656812 waagent[1628]: 2025-08-13T00:52:43.656743Z INFO ExtHandler Fetch goal state from WireServer completed Aug 13 00:52:43.657662 waagent[1628]: 2025-08-13T00:52:43.657601Z INFO ExtHandler ExtHandler Goal state initialization completed. Aug 13 00:52:43.675223 waagent[1628]: 2025-08-13T00:52:43.675129Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Aug 13 00:52:43.682895 waagent[1628]: 2025-08-13T00:52:43.682803Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:52:43.686305 waagent[1628]: 2025-08-13T00:52:43.686214Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Aug 13 00:52:43.686522 waagent[1628]: 2025-08-13T00:52:43.686468Z INFO ExtHandler ExtHandler Checking state of the firewall Aug 13 00:52:43.848132 waagent[1628]: 2025-08-13T00:52:43.848011Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Aug 13 00:52:43.848132 waagent[1628]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:52:43.848132 waagent[1628]: pkts bytes target prot opt in out source destination Aug 13 00:52:43.848132 waagent[1628]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:52:43.848132 waagent[1628]: pkts bytes target prot opt in out source destination Aug 13 00:52:43.848132 waagent[1628]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:52:43.848132 waagent[1628]: pkts bytes target prot opt in out source destination Aug 13 00:52:43.848132 waagent[1628]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:52:43.848132 waagent[1628]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 owner UID match 0 Aug 13 00:52:43.848132 waagent[1628]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:52:43.849284 waagent[1628]: 2025-08-13T00:52:43.849215Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Aug 13 00:52:43.851889 waagent[1628]: 2025-08-13T00:52:43.851788Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Aug 13 00:52:43.852184 waagent[1628]: 2025-08-13T00:52:43.852129Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:52:43.852582 waagent[1628]: 2025-08-13T00:52:43.852524Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:52:43.860789 waagent[1628]: 2025-08-13T00:52:43.860726Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 00:52:43.861325 waagent[1628]: 2025-08-13T00:52:43.861264Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:52:43.868673 waagent[1628]: 2025-08-13T00:52:43.868608Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1628 Aug 13 00:52:43.871722 waagent[1628]: 2025-08-13T00:52:43.871657Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:52:43.872509 waagent[1628]: 2025-08-13T00:52:43.872442Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Aug 13 00:52:43.873373 waagent[1628]: 2025-08-13T00:52:43.873313Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Aug 13 00:52:43.875877 waagent[1628]: 2025-08-13T00:52:43.875815Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Aug 13 
00:52:43.876217 waagent[1628]: 2025-08-13T00:52:43.876162Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Aug 13 00:52:43.877867 waagent[1628]: 2025-08-13T00:52:43.877808Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:52:43.878316 waagent[1628]: 2025-08-13T00:52:43.878256Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:43.878485 waagent[1628]: 2025-08-13T00:52:43.878435Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:43.879021 waagent[1628]: 2025-08-13T00:52:43.878960Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 00:52:43.879767 waagent[1628]: 2025-08-13T00:52:43.879711Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Aug 13 00:52:43.880463 waagent[1628]: 2025-08-13T00:52:43.880408Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:52:43.880463 waagent[1628]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:52:43.880463 waagent[1628]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:52:43.880463 waagent[1628]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:52:43.880463 waagent[1628]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:43.880463 waagent[1628]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:43.880463 waagent[1628]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:52:43.880756 waagent[1628]: 2025-08-13T00:52:43.880507Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:52:43.880756 waagent[1628]: 2025-08-13T00:52:43.880686Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:52:43.881134 waagent[1628]: 2025-08-13T00:52:43.881080Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:52:43.882245 waagent[1628]: 2025-08-13T00:52:43.882202Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:52:43.882350 waagent[1628]: 2025-08-13T00:52:43.882128Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:52:43.882889 waagent[1628]: 2025-08-13T00:52:43.882768Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:52:43.885188 waagent[1628]: 2025-08-13T00:52:43.884907Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:52:43.894728 waagent[1628]: 2025-08-13T00:52:43.894647Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Aug 13 00:52:43.897435 waagent[1628]: 2025-08-13T00:52:43.894239Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:52:43.897435 waagent[1628]: 2025-08-13T00:52:43.897161Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:52:43.910920 waagent[1628]: 2025-08-13T00:52:43.910860Z INFO ExtHandler ExtHandler Downloading agent manifest Aug 13 00:52:43.915288 waagent[1628]: 2025-08-13T00:52:43.915228Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:52:43.916153 waagent[1628]: 2025-08-13T00:52:43.916093Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:52:43.916153 waagent[1628]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:52:43.916153 waagent[1628]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:52:43.916153 waagent[1628]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:72:ee brd ff:ff:ff:ff:ff:ff Aug 13 00:52:43.916153 waagent[1628]: 3: enP60014s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:72:ee brd ff:ff:ff:ff:ff:ff\ altname enP60014p0s2 Aug 13 00:52:43.916153 waagent[1628]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:52:43.916153 waagent[1628]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:52:43.916153 waagent[1628]: 2: eth0 inet 10.200.4.32/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:52:43.916153 waagent[1628]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:52:43.916153 waagent[1628]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:52:43.916153 waagent[1628]: 2: eth0 inet6 fe80::7e1e:52ff:fe04:72ee/64 scope link \ valid_lft forever preferred_lft 
forever Aug 13 00:52:43.934516 waagent[1628]: 2025-08-13T00:52:43.934451Z INFO ExtHandler ExtHandler Aug 13 00:52:43.934895 waagent[1628]: 2025-08-13T00:52:43.934843Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a6757911-b0bd-4688-ae79-dd00249cec3d correlation d845a22a-9576-49b3-9433-79f03007f876 created: 2025-08-13T00:50:45.764595Z] Aug 13 00:52:43.938970 waagent[1628]: 2025-08-13T00:52:43.938904Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Aug 13 00:52:43.944434 waagent[1628]: 2025-08-13T00:52:43.944373Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Aug 13 00:52:43.969327 waagent[1628]: 2025-08-13T00:52:43.969188Z INFO ExtHandler ExtHandler Looking for existing remote access users. Aug 13 00:52:43.979545 waagent[1628]: 2025-08-13T00:52:43.979478Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6EF87515-B59C-43DA-B9F1-233A2C025CFF;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Aug 13 00:52:43.981102 waagent[1628]: 2025-08-13T00:52:43.981045Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 13 00:52:48.286166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:52:48.286468 systemd[1]: Stopped kubelet.service. Aug 13 00:52:48.288505 systemd[1]: Starting kubelet.service... Aug 13 00:52:48.629083 systemd[1]: Started kubelet.service. 
Aug 13 00:52:49.024075 kubelet[1673]: E0813 00:52:49.023967 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:52:49.025878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:52:49.026056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:52:59.036224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:52:59.036526 systemd[1]: Stopped kubelet.service. Aug 13 00:52:59.038510 systemd[1]: Starting kubelet.service... Aug 13 00:52:59.380301 systemd[1]: Started kubelet.service. Aug 13 00:52:59.594295 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Aug 13 00:52:59.775116 kubelet[1683]: E0813 00:52:59.775012 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:52:59.776787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:52:59.776969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:02.844590 systemd[1]: Created slice system-sshd.slice. Aug 13 00:53:02.846881 systemd[1]: Started sshd@0-10.200.4.32:22-10.200.16.10:59042.service. Aug 13 00:53:03.734247 sshd[1690]: Accepted publickey for core from 10.200.16.10 port 59042 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:03.735932 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:03.741245 systemd[1]: Started session-3.scope. 
Aug 13 00:53:03.741698 systemd-logind[1423]: New session 3 of user core. Aug 13 00:53:04.257305 systemd[1]: Started sshd@1-10.200.4.32:22-10.200.16.10:59046.service. Aug 13 00:53:04.844321 sshd[1695]: Accepted publickey for core from 10.200.16.10 port 59046 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:04.846049 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:04.851780 systemd[1]: Started session-4.scope. Aug 13 00:53:04.852541 systemd-logind[1423]: New session 4 of user core. Aug 13 00:53:05.268124 sshd[1695]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:05.271697 systemd[1]: sshd@1-10.200.4.32:22-10.200.16.10:59046.service: Deactivated successfully. Aug 13 00:53:05.272830 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:53:05.273665 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:53:05.274602 systemd-logind[1423]: Removed session 4. Aug 13 00:53:05.366260 systemd[1]: Started sshd@2-10.200.4.32:22-10.200.16.10:59058.service. Aug 13 00:53:05.952530 sshd[1701]: Accepted publickey for core from 10.200.16.10 port 59058 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:05.954250 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:05.960035 systemd[1]: Started session-5.scope. Aug 13 00:53:05.960623 systemd-logind[1423]: New session 5 of user core. Aug 13 00:53:06.376848 sshd[1701]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:06.379893 systemd[1]: sshd@2-10.200.4.32:22-10.200.16.10:59058.service: Deactivated successfully. Aug 13 00:53:06.380760 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:53:06.381380 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:53:06.382147 systemd-logind[1423]: Removed session 5. 
Aug 13 00:53:06.477250 systemd[1]: Started sshd@3-10.200.4.32:22-10.200.16.10:59062.service. Aug 13 00:53:07.070603 sshd[1707]: Accepted publickey for core from 10.200.16.10 port 59062 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:07.073307 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:07.078227 systemd[1]: Started session-6.scope. Aug 13 00:53:07.078810 systemd-logind[1423]: New session 6 of user core. Aug 13 00:53:07.503267 sshd[1707]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:07.506678 systemd[1]: sshd@3-10.200.4.32:22-10.200.16.10:59062.service: Deactivated successfully. Aug 13 00:53:07.507540 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:53:07.508169 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:53:07.508894 systemd-logind[1423]: Removed session 6. Aug 13 00:53:07.603110 systemd[1]: Started sshd@4-10.200.4.32:22-10.200.16.10:59076.service. Aug 13 00:53:08.195785 sshd[1713]: Accepted publickey for core from 10.200.16.10 port 59076 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:08.197468 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:08.202923 systemd[1]: Started session-7.scope. Aug 13 00:53:08.203391 systemd-logind[1423]: New session 7 of user core. Aug 13 00:53:08.935568 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:53:08.935963 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:53:08.975281 systemd[1]: Starting docker.service... 
Aug 13 00:53:09.033161 env[1726]: time="2025-08-13T00:53:09.033120415Z" level=info msg="Starting up" Aug 13 00:53:09.034633 env[1726]: time="2025-08-13T00:53:09.034610117Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:09.034752 env[1726]: time="2025-08-13T00:53:09.034739018Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:09.034830 env[1726]: time="2025-08-13T00:53:09.034815618Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:09.034879 env[1726]: time="2025-08-13T00:53:09.034870718Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:09.036572 env[1726]: time="2025-08-13T00:53:09.036554321Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:53:09.036650 env[1726]: time="2025-08-13T00:53:09.036640021Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:53:09.036707 env[1726]: time="2025-08-13T00:53:09.036696421Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:53:09.036750 env[1726]: time="2025-08-13T00:53:09.036742521Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:53:09.044753 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1312977567-merged.mount: Deactivated successfully. Aug 13 00:53:09.133401 env[1726]: time="2025-08-13T00:53:09.133360186Z" level=info msg="Loading containers: start." Aug 13 00:53:09.368089 kernel: Initializing XFRM netlink socket Aug 13 00:53:09.408196 env[1726]: time="2025-08-13T00:53:09.408147054Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 13 00:53:09.618540 systemd-networkd[1594]: docker0: Link UP Aug 13 00:53:09.643136 env[1726]: time="2025-08-13T00:53:09.643097355Z" level=info msg="Loading containers: done." Aug 13 00:53:09.661273 env[1726]: time="2025-08-13T00:53:09.661234286Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:53:09.661457 env[1726]: time="2025-08-13T00:53:09.661422986Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:53:09.661553 env[1726]: time="2025-08-13T00:53:09.661529786Z" level=info msg="Daemon has completed initialization" Aug 13 00:53:09.690703 systemd[1]: Started docker.service. Aug 13 00:53:09.700530 env[1726]: time="2025-08-13T00:53:09.700478753Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:53:09.786132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:53:09.786394 systemd[1]: Stopped kubelet.service. Aug 13 00:53:09.788406 systemd[1]: Starting kubelet.service... Aug 13 00:53:09.963208 systemd[1]: Started kubelet.service. Aug 13 00:53:10.542456 kubelet[1844]: E0813 00:53:10.542386 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:10.544172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:10.544352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:11.614073 update_engine[1424]: I0813 00:53:11.614017 1424 update_attempter.cc:509] Updating boot flags... 
Aug 13 00:53:14.620197 env[1437]: time="2025-08-13T00:53:14.620151951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:53:15.378440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983261593.mount: Deactivated successfully. Aug 13 00:53:17.005210 env[1437]: time="2025-08-13T00:53:17.005153668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:17.011337 env[1437]: time="2025-08-13T00:53:17.011295674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:17.016082 env[1437]: time="2025-08-13T00:53:17.016049579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:17.019801 env[1437]: time="2025-08-13T00:53:17.019703983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:17.020734 env[1437]: time="2025-08-13T00:53:17.020702784Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 00:53:17.021545 env[1437]: time="2025-08-13T00:53:17.021520285Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:53:18.729107 env[1437]: time="2025-08-13T00:53:18.729048976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Aug 13 00:53:18.734066 env[1437]: time="2025-08-13T00:53:18.734026181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:18.738032 env[1437]: time="2025-08-13T00:53:18.737995785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:18.741591 env[1437]: time="2025-08-13T00:53:18.741560688Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:18.742228 env[1437]: time="2025-08-13T00:53:18.742197189Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 13 00:53:18.742906 env[1437]: time="2025-08-13T00:53:18.742879989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 00:53:20.148480 env[1437]: time="2025-08-13T00:53:20.148425453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:20.155019 env[1437]: time="2025-08-13T00:53:20.154975659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:20.158960 env[1437]: time="2025-08-13T00:53:20.158907062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:20.162789 env[1437]: time="2025-08-13T00:53:20.162760865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:20.163543 env[1437]: time="2025-08-13T00:53:20.163506966Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 13 00:53:20.164272 env[1437]: time="2025-08-13T00:53:20.164245867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 00:53:20.786138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Aug 13 00:53:20.786386 systemd[1]: Stopped kubelet.service.
Aug 13 00:53:20.788608 systemd[1]: Starting kubelet.service...
Aug 13 00:53:20.926955 systemd[1]: Started kubelet.service.
Aug 13 00:53:21.726987 kubelet[1892]: E0813 00:53:21.726860 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:53:21.729419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:53:21.729539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:53:22.219722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895414991.mount: Deactivated successfully.
Aug 13 00:53:22.850705 env[1437]: time="2025-08-13T00:53:22.850648185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:22.856021 env[1437]: time="2025-08-13T00:53:22.855977421Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:22.858709 env[1437]: time="2025-08-13T00:53:22.858668439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:22.861903 env[1437]: time="2025-08-13T00:53:22.861871861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:22.862266 env[1437]: time="2025-08-13T00:53:22.862233963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 13 00:53:22.862981 env[1437]: time="2025-08-13T00:53:22.862957068Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:53:23.504456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496991256.mount: Deactivated successfully.
Aug 13 00:53:24.736157 env[1437]: time="2025-08-13T00:53:24.736041970Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.742705 env[1437]: time="2025-08-13T00:53:24.742667220Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.746355 env[1437]: time="2025-08-13T00:53:24.746322414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.750842 env[1437]: time="2025-08-13T00:53:24.750809551Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.751572 env[1437]: time="2025-08-13T00:53:24.751541390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:53:24.752274 env[1437]: time="2025-08-13T00:53:24.752250628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:53:25.663589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120570599.mount: Deactivated successfully.
Aug 13 00:53:25.682842 env[1437]: time="2025-08-13T00:53:25.682728888Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:25.688722 env[1437]: time="2025-08-13T00:53:25.688627291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:25.692012 env[1437]: time="2025-08-13T00:53:25.691985064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:25.700173 env[1437]: time="2025-08-13T00:53:25.700139284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:25.700678 env[1437]: time="2025-08-13T00:53:25.700631509Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:53:25.701243 env[1437]: time="2025-08-13T00:53:25.701217839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 00:53:26.344588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643629768.mount: Deactivated successfully.
Aug 13 00:53:28.864112 env[1437]: time="2025-08-13T00:53:28.864054557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:28.868960 env[1437]: time="2025-08-13T00:53:28.868913587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:28.872823 env[1437]: time="2025-08-13T00:53:28.872782871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:28.878780 env[1437]: time="2025-08-13T00:53:28.878739152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 13 00:53:28.880897 env[1437]: time="2025-08-13T00:53:28.879955310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:31.555728 systemd[1]: Stopped kubelet.service.
Aug 13 00:53:31.558611 systemd[1]: Starting kubelet.service...
Aug 13 00:53:31.581566 systemd[1]: Reloading.
Aug 13 00:53:31.666151 /usr/lib/systemd/system-generators/torcx-generator[1942]: time="2025-08-13T00:53:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:53:31.679246 /usr/lib/systemd/system-generators/torcx-generator[1942]: time="2025-08-13T00:53:31Z" level=info msg="torcx already run"
Aug 13 00:53:31.771057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:53:31.771077 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:53:31.787269 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:53:31.906617 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:53:31.907037 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:53:31.907626 systemd[1]: Stopped kubelet.service.
Aug 13 00:53:31.911215 systemd[1]: Starting kubelet.service...
Aug 13 00:53:33.067998 systemd[1]: Started kubelet.service.
Aug 13 00:53:33.853329 kubelet[2008]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:53:33.853736 kubelet[2008]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:53:33.853808 kubelet[2008]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:53:33.854064 kubelet[2008]: I0813 00:53:33.854012 2008 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:53:34.266588 kubelet[2008]: I0813 00:53:34.266480 2008 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 00:53:34.266588 kubelet[2008]: I0813 00:53:34.266510 2008 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:53:34.267103 kubelet[2008]: I0813 00:53:34.267071 2008 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 00:53:34.296073 kubelet[2008]: E0813 00:53:34.296034 2008 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:53:34.298326 kubelet[2008]: I0813 00:53:34.298296 2008 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:53:34.307067 kubelet[2008]: E0813 00:53:34.307036 2008 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:53:34.307067 kubelet[2008]: I0813 00:53:34.307062 2008 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:53:34.310492 kubelet[2008]: I0813 00:53:34.310472 2008 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:53:34.310753 kubelet[2008]: I0813 00:53:34.310717 2008 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:53:34.310924 kubelet[2008]: I0813 00:53:34.310751 2008 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-4e9ab5f8c8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:53:34.312083 kubelet[2008]: I0813 00:53:34.312062 2008 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:53:34.312158 kubelet[2008]: I0813 00:53:34.312088 2008 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 00:53:34.312235 kubelet[2008]: I0813 00:53:34.312217 2008 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:53:34.315816 kubelet[2008]: I0813 00:53:34.315795 2008 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 00:53:34.315909 kubelet[2008]: I0813 00:53:34.315827 2008 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:53:34.315909 kubelet[2008]: I0813 00:53:34.315851 2008 kubelet.go:352] "Adding apiserver pod source"
Aug 13 00:53:34.315909 kubelet[2008]: I0813 00:53:34.315866 2008 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:53:34.337109 kubelet[2008]: W0813 00:53:34.336899 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-4e9ab5f8c8&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused
Aug 13 00:53:34.337109 kubelet[2008]: E0813 00:53:34.336982 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-4e9ab5f8c8&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:53:34.337109 kubelet[2008]: W0813 00:53:34.337071 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused
Aug 13 00:53:34.337288 kubelet[2008]: E0813 00:53:34.337112 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:53:34.337288 kubelet[2008]: I0813 00:53:34.337183 2008 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:53:34.337684 kubelet[2008]: I0813 00:53:34.337662 2008 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:53:34.337754 kubelet[2008]: W0813 00:53:34.337730 2008 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:53:34.344342 kubelet[2008]: I0813 00:53:34.344317 2008 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:53:34.344428 kubelet[2008]: I0813 00:53:34.344354 2008 server.go:1287] "Started kubelet"
Aug 13 00:53:34.354690 kubelet[2008]: E0813 00:53:34.353414 2008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-4e9ab5f8c8.185b2d6a4b512492 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-4e9ab5f8c8,UID:ci-3510.3.8-a-4e9ab5f8c8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-4e9ab5f8c8,},FirstTimestamp:2025-08-13 00:53:34.344332434 +0000 UTC m=+1.271101796,LastTimestamp:2025-08-13 00:53:34.344332434 +0000 UTC m=+1.271101796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-4e9ab5f8c8,}"
Aug 13 00:53:34.355560 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Aug 13 00:53:34.355703 kubelet[2008]: I0813 00:53:34.355684 2008 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:53:34.358246 kubelet[2008]: I0813 00:53:34.358214 2008 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:53:34.359253 kubelet[2008]: I0813 00:53:34.359232 2008 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 00:53:34.359751 kubelet[2008]: I0813 00:53:34.359731 2008 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:53:34.360540 kubelet[2008]: E0813 00:53:34.360514 2008 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found"
Aug 13 00:53:34.361530 kubelet[2008]: I0813 00:53:34.361509 2008 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:53:34.361848 kubelet[2008]: I0813 00:53:34.361836 2008 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:53:34.363777 kubelet[2008]: I0813 00:53:34.363718 2008 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:53:34.364008 kubelet[2008]: I0813 00:53:34.363989 2008 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:53:34.364264 kubelet[2008]: I0813 00:53:34.364242 2008 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:53:34.366324 kubelet[2008]: W0813 00:53:34.365406 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused
Aug 13 00:53:34.366324 kubelet[2008]: E0813 00:53:34.365467 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:53:34.366324 kubelet[2008]: E0813 00:53:34.365554 2008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-4e9ab5f8c8?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="200ms"
Aug 13 00:53:34.368723 kubelet[2008]: I0813 00:53:34.368709 2008 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:53:34.368828 kubelet[2008]: I0813 00:53:34.368817 2008 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:53:34.369029 kubelet[2008]: I0813 00:53:34.369009 2008 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:53:34.375661 kubelet[2008]: E0813 00:53:34.375632 2008 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:53:34.400986 kubelet[2008]: I0813 00:53:34.400970 2008 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 00:53:34.401102 kubelet[2008]: I0813 00:53:34.401093 2008 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 00:53:34.401169 kubelet[2008]: I0813 00:53:34.401162 2008 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:53:34.407931 kubelet[2008]: I0813 00:53:34.407915 2008 policy_none.go:49] "None policy: Start"
Aug 13 00:53:34.408050 kubelet[2008]: I0813 00:53:34.407980 2008 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 00:53:34.408050 kubelet[2008]: I0813 00:53:34.407999 2008 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:53:34.416523 kubelet[2008]: I0813 00:53:34.416478 2008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:53:34.419034 kubelet[2008]: I0813 00:53:34.419014 2008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:53:34.419151 kubelet[2008]: I0813 00:53:34.419140 2008 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 00:53:34.419245 kubelet[2008]: I0813 00:53:34.419233 2008 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:53:34.419307 kubelet[2008]: I0813 00:53:34.419299 2008 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 00:53:34.419429 kubelet[2008]: E0813 00:53:34.419401 2008 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:53:34.420483 systemd[1]: Created slice kubepods.slice.
Aug 13 00:53:34.422886 kubelet[2008]: W0813 00:53:34.422862 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused
Aug 13 00:53:34.424517 kubelet[2008]: E0813 00:53:34.424494 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:53:34.428696 systemd[1]: Created slice kubepods-burstable.slice.
Aug 13 00:53:34.431660 systemd[1]: Created slice kubepods-besteffort.slice.
Aug 13 00:53:34.439111 kubelet[2008]: I0813 00:53:34.439094 2008 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:53:34.439347 kubelet[2008]: I0813 00:53:34.439333 2008 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:53:34.439922 kubelet[2008]: I0813 00:53:34.439877 2008 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:53:34.440575 kubelet[2008]: I0813 00:53:34.440561 2008 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:53:34.442375 kubelet[2008]: E0813 00:53:34.442359 2008 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 00:53:34.442479 kubelet[2008]: E0813 00:53:34.442468 2008 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-a-4e9ab5f8c8\" not found"
Aug 13 00:53:34.529785 systemd[1]: Created slice kubepods-burstable-pod54c70f937f4ee037adfaae29a3987a31.slice.
Aug 13 00:53:34.538517 kubelet[2008]: E0813 00:53:34.538257 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.540630 systemd[1]: Created slice kubepods-burstable-pod03237e9ef5cd3a6cd69672d5b1531cdf.slice.
Aug 13 00:53:34.542260 kubelet[2008]: I0813 00:53:34.542240 2008 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.542733 kubelet[2008]: E0813 00:53:34.542709 2008 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.543070 kubelet[2008]: E0813 00:53:34.543050 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.551811 systemd[1]: Created slice kubepods-burstable-podef4b33e87cb3ea80c95e2b747ef885c4.slice.
Aug 13 00:53:34.553464 kubelet[2008]: E0813 00:53:34.553444 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563045 kubelet[2008]: I0813 00:53:34.563015 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563135 kubelet[2008]: I0813 00:53:34.563046 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563135 kubelet[2008]: I0813 00:53:34.563075 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563135 kubelet[2008]: I0813 00:53:34.563099 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef4b33e87cb3ea80c95e2b747ef885c4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"ef4b33e87cb3ea80c95e2b747ef885c4\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563135 kubelet[2008]: I0813 00:53:34.563124 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54c70f937f4ee037adfaae29a3987a31-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"54c70f937f4ee037adfaae29a3987a31\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563302 kubelet[2008]: I0813 00:53:34.563144 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54c70f937f4ee037adfaae29a3987a31-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"54c70f937f4ee037adfaae29a3987a31\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563302 kubelet[2008]: I0813 00:53:34.563167 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54c70f937f4ee037adfaae29a3987a31-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"54c70f937f4ee037adfaae29a3987a31\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563302 kubelet[2008]: I0813 00:53:34.563190 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.563302 kubelet[2008]: I0813 00:53:34.563214 2008 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.566765 kubelet[2008]: E0813 00:53:34.566733 2008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-4e9ab5f8c8?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="400ms"
Aug 13 00:53:34.725077 kubelet[2008]: E0813 00:53:34.724958 2008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-4e9ab5f8c8.185b2d6a4b512492 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-4e9ab5f8c8,UID:ci-3510.3.8-a-4e9ab5f8c8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-4e9ab5f8c8,},FirstTimestamp:2025-08-13 00:53:34.344332434 +0000 UTC m=+1.271101796,LastTimestamp:2025-08-13 00:53:34.344332434 +0000 UTC m=+1.271101796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-4e9ab5f8c8,}"
Aug 13 00:53:34.744780 kubelet[2008]: I0813 00:53:34.744750 2008 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.745288 kubelet[2008]: E0813 00:53:34.745251 2008 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:34.839419 env[1437]: time="2025-08-13T00:53:34.839365225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8,Uid:54c70f937f4ee037adfaae29a3987a31,Namespace:kube-system,Attempt:0,}"
Aug 13 00:53:34.844867 env[1437]: time="2025-08-13T00:53:34.844831845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8,Uid:03237e9ef5cd3a6cd69672d5b1531cdf,Namespace:kube-system,Attempt:0,}"
Aug 13 00:53:34.854974 env[1437]: time="2025-08-13T00:53:34.854943951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8,Uid:ef4b33e87cb3ea80c95e2b747ef885c4,Namespace:kube-system,Attempt:0,}"
Aug 13 00:53:34.967233 kubelet[2008]: E0813 00:53:34.967185 2008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-4e9ab5f8c8?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="800ms"
Aug 13 00:53:35.148294 kubelet[2008]: I0813 00:53:35.147933 2008 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:35.148617 kubelet[2008]: E0813 00:53:35.148580 2008 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-3510.3.8-a-4e9ab5f8c8"
Aug 13 00:53:35.319108 kubelet[2008]: W0813 00:53:35.319047 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused
Aug 13 00:53:35.319263 kubelet[2008]: E0813 00:53:35.319117 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:53:35.411139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095089521.mount: Deactivated successfully.
Aug 13 00:53:35.433163 env[1437]: time="2025-08-13T00:53:35.433117223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:35.441168 env[1437]: time="2025-08-13T00:53:35.441062533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:35.445411 env[1437]: time="2025-08-13T00:53:35.445373302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:35.451192 env[1437]: time="2025-08-13T00:53:35.451159428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:35.456444 env[1437]: time="2025-08-13T00:53:35.456408134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:35.458983 env[1437]: time="2025-08-13T00:53:35.458949233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:35.461684 env[1437]: time="2025-08-13T00:53:35.461649439Z" level=info msg="ImageCreate event
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:35.466528 env[1437]: time="2025-08-13T00:53:35.466494928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:35.470355 env[1437]: time="2025-08-13T00:53:35.470324678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:35.473404 env[1437]: time="2025-08-13T00:53:35.473371397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:35.484356 env[1437]: time="2025-08-13T00:53:35.484319425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:35.495368 env[1437]: time="2025-08-13T00:53:35.495334456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:35.499757 kubelet[2008]: W0813 00:53:35.499726 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Aug 13 00:53:35.499854 kubelet[2008]: E0813 00:53:35.499780 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:35.549063 env[1437]: time="2025-08-13T00:53:35.542688608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:35.549063 env[1437]: time="2025-08-13T00:53:35.542720210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:35.549063 env[1437]: time="2025-08-13T00:53:35.542729710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:35.549063 env[1437]: time="2025-08-13T00:53:35.542917817Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b287902d2f3e61781e48e4fd0420ea926e3afd4566dffd4df57ef8063fec9f5 pid=2048 runtime=io.containerd.runc.v2 Aug 13 00:53:35.555501 kubelet[2008]: W0813 00:53:35.555294 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Aug 13 00:53:35.555501 kubelet[2008]: E0813 00:53:35.555342 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:35.560749 env[1437]: time="2025-08-13T00:53:35.560691513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:35.560917 env[1437]: time="2025-08-13T00:53:35.560895321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:35.561036 env[1437]: time="2025-08-13T00:53:35.561002725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:35.561276 env[1437]: time="2025-08-13T00:53:35.561240934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40c7d2c88162273613ec2447aa61cb78b03d05d25c0338d5436e325b9d64d26e pid=2065 runtime=io.containerd.runc.v2 Aug 13 00:53:35.579378 systemd[1]: Started cri-containerd-7b287902d2f3e61781e48e4fd0420ea926e3afd4566dffd4df57ef8063fec9f5.scope. Aug 13 00:53:35.592514 systemd[1]: Started cri-containerd-40c7d2c88162273613ec2447aa61cb78b03d05d25c0338d5436e325b9d64d26e.scope. Aug 13 00:53:35.599661 env[1437]: time="2025-08-13T00:53:35.598300484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:35.599661 env[1437]: time="2025-08-13T00:53:35.598443289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:35.599661 env[1437]: time="2025-08-13T00:53:35.598479391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:35.599661 env[1437]: time="2025-08-13T00:53:35.598684199Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/652f813794e18e75ea4df56f22698f514ac30fe1b21dc615c5e40d7c4d92de43 pid=2096 runtime=io.containerd.runc.v2 Aug 13 00:53:35.610876 kubelet[2008]: W0813 00:53:35.610794 2008 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-4e9ab5f8c8&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Aug 13 00:53:35.611053 kubelet[2008]: E0813 00:53:35.610901 2008 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-4e9ab5f8c8&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:53:35.620852 systemd[1]: Started cri-containerd-652f813794e18e75ea4df56f22698f514ac30fe1b21dc615c5e40d7c4d92de43.scope. 
Aug 13 00:53:35.692909 env[1437]: time="2025-08-13T00:53:35.691279220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8,Uid:03237e9ef5cd3a6cd69672d5b1531cdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b287902d2f3e61781e48e4fd0420ea926e3afd4566dffd4df57ef8063fec9f5\"" Aug 13 00:53:35.695511 env[1437]: time="2025-08-13T00:53:35.695464884Z" level=info msg="CreateContainer within sandbox \"7b287902d2f3e61781e48e4fd0420ea926e3afd4566dffd4df57ef8063fec9f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:53:35.698385 env[1437]: time="2025-08-13T00:53:35.698347397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8,Uid:54c70f937f4ee037adfaae29a3987a31,Namespace:kube-system,Attempt:0,} returns sandbox id \"40c7d2c88162273613ec2447aa61cb78b03d05d25c0338d5436e325b9d64d26e\"" Aug 13 00:53:35.701422 env[1437]: time="2025-08-13T00:53:35.701392516Z" level=info msg="CreateContainer within sandbox \"40c7d2c88162273613ec2447aa61cb78b03d05d25c0338d5436e325b9d64d26e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:53:35.703318 env[1437]: time="2025-08-13T00:53:35.703284690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8,Uid:ef4b33e87cb3ea80c95e2b747ef885c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"652f813794e18e75ea4df56f22698f514ac30fe1b21dc615c5e40d7c4d92de43\"" Aug 13 00:53:35.705555 env[1437]: time="2025-08-13T00:53:35.705512677Z" level=info msg="CreateContainer within sandbox \"652f813794e18e75ea4df56f22698f514ac30fe1b21dc615c5e40d7c4d92de43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:53:35.754998 env[1437]: time="2025-08-13T00:53:35.754915510Z" level=info msg="CreateContainer within sandbox \"7b287902d2f3e61781e48e4fd0420ea926e3afd4566dffd4df57ef8063fec9f5\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b30e9398486a45e96b192da13a39bcd30a223b42372fdbfe4127d7f3780dd1e9\"" Aug 13 00:53:35.757309 env[1437]: time="2025-08-13T00:53:35.757274302Z" level=info msg="CreateContainer within sandbox \"40c7d2c88162273613ec2447aa61cb78b03d05d25c0338d5436e325b9d64d26e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09dc9a9cee0634150944220336fb9ab889aa435fa37235fab36680f48dc8c1c9\"" Aug 13 00:53:35.757698 env[1437]: time="2025-08-13T00:53:35.757666517Z" level=info msg="StartContainer for \"b30e9398486a45e96b192da13a39bcd30a223b42372fdbfe4127d7f3780dd1e9\"" Aug 13 00:53:35.761619 env[1437]: time="2025-08-13T00:53:35.761571870Z" level=info msg="CreateContainer within sandbox \"652f813794e18e75ea4df56f22698f514ac30fe1b21dc615c5e40d7c4d92de43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb2879fb442c3ac5e92ce0df537aafa418b0d79b4c28e15e904db7447504e052\"" Aug 13 00:53:35.761875 env[1437]: time="2025-08-13T00:53:35.761849781Z" level=info msg="StartContainer for \"09dc9a9cee0634150944220336fb9ab889aa435fa37235fab36680f48dc8c1c9\"" Aug 13 00:53:35.764551 env[1437]: time="2025-08-13T00:53:35.764520185Z" level=info msg="StartContainer for \"cb2879fb442c3ac5e92ce0df537aafa418b0d79b4c28e15e904db7447504e052\"" Aug 13 00:53:35.767974 kubelet[2008]: E0813 00:53:35.767894 2008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-4e9ab5f8c8?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="1.6s" Aug 13 00:53:35.783366 systemd[1]: Started cri-containerd-b30e9398486a45e96b192da13a39bcd30a223b42372fdbfe4127d7f3780dd1e9.scope. Aug 13 00:53:35.793773 systemd[1]: Started cri-containerd-09dc9a9cee0634150944220336fb9ab889aa435fa37235fab36680f48dc8c1c9.scope. 
Aug 13 00:53:35.821504 systemd[1]: Started cri-containerd-cb2879fb442c3ac5e92ce0df537aafa418b0d79b4c28e15e904db7447504e052.scope. Aug 13 00:53:35.884745 env[1437]: time="2025-08-13T00:53:35.884700486Z" level=info msg="StartContainer for \"b30e9398486a45e96b192da13a39bcd30a223b42372fdbfe4127d7f3780dd1e9\" returns successfully" Aug 13 00:53:35.889539 env[1437]: time="2025-08-13T00:53:35.889495974Z" level=info msg="StartContainer for \"09dc9a9cee0634150944220336fb9ab889aa435fa37235fab36680f48dc8c1c9\" returns successfully" Aug 13 00:53:35.951281 kubelet[2008]: I0813 00:53:35.951155 2008 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:35.952275 kubelet[2008]: E0813 00:53:35.951593 2008 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:36.001786 env[1437]: time="2025-08-13T00:53:36.001725862Z" level=info msg="StartContainer for \"cb2879fb442c3ac5e92ce0df537aafa418b0d79b4c28e15e904db7447504e052\" returns successfully" Aug 13 00:53:36.440896 kubelet[2008]: E0813 00:53:36.440868 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:36.445039 kubelet[2008]: E0813 00:53:36.445010 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:36.445284 kubelet[2008]: E0813 00:53:36.445262 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:37.448079 kubelet[2008]: E0813 00:53:37.447998 2008 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:37.449460 kubelet[2008]: E0813 00:53:37.449431 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:37.450011 kubelet[2008]: E0813 00:53:37.449988 2008 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:37.554142 kubelet[2008]: I0813 00:53:37.554111 2008 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:37.872969 kubelet[2008]: E0813 00:53:37.872917 2008 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-a-4e9ab5f8c8\" not found" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.096896 kubelet[2008]: I0813 00:53:38.096858 2008 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.162153 kubelet[2008]: I0813 00:53:38.162042 2008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.170145 kubelet[2008]: E0813 00:53:38.170108 2008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.170145 kubelet[2008]: I0813 00:53:38.170140 2008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.171882 kubelet[2008]: E0813 00:53:38.171855 2008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.171882 kubelet[2008]: I0813 00:53:38.171879 2008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.175150 kubelet[2008]: E0813 00:53:38.175122 2008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.327476 kubelet[2008]: I0813 00:53:38.327431 2008 apiserver.go:52] "Watching apiserver" Aug 13 00:53:38.362572 kubelet[2008]: I0813 00:53:38.362542 2008 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:53:38.446692 kubelet[2008]: I0813 00:53:38.446571 2008 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:38.449276 kubelet[2008]: E0813 00:53:38.449096 2008 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:39.816308 systemd[1]: Reloading. 
Aug 13 00:53:39.917497 /usr/lib/systemd/system-generators/torcx-generator[2300]: time="2025-08-13T00:53:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:53:39.917533 /usr/lib/systemd/system-generators/torcx-generator[2300]: time="2025-08-13T00:53:39Z" level=info msg="torcx already run" Aug 13 00:53:40.018381 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:40.018399 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:53:40.036228 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:53:40.148286 systemd[1]: Stopping kubelet.service... Aug 13 00:53:40.169572 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:53:40.169784 systemd[1]: Stopped kubelet.service. Aug 13 00:53:40.171726 systemd[1]: Starting kubelet.service... Aug 13 00:53:40.380505 systemd[1]: Started kubelet.service. Aug 13 00:53:40.936706 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:53:40.936706 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:53:40.936706 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:53:40.937226 kubelet[2368]: I0813 00:53:40.936791 2368 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:53:40.947387 kubelet[2368]: I0813 00:53:40.947355 2368 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:53:40.947387 kubelet[2368]: I0813 00:53:40.947378 2368 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:53:40.947648 kubelet[2368]: I0813 00:53:40.947627 2368 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:53:40.948708 kubelet[2368]: I0813 00:53:40.948683 2368 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:53:40.951661 kubelet[2368]: I0813 00:53:40.951637 2368 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:53:40.954862 kubelet[2368]: E0813 00:53:40.954827 2368 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:53:40.954862 kubelet[2368]: I0813 00:53:40.954861 2368 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:53:40.962312 kubelet[2368]: I0813 00:53:40.962292 2368 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:53:40.962680 kubelet[2368]: I0813 00:53:40.962647 2368 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:53:40.963081 kubelet[2368]: I0813 00:53:40.962770 2368 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-4e9ab5f8c8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:53:40.963287 kubelet[2368]: I0813 00:53:40.963272 2368 topology_manager.go:138] "Creating topology manager 
with none policy" Aug 13 00:53:40.963364 kubelet[2368]: I0813 00:53:40.963355 2368 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:53:40.963482 kubelet[2368]: I0813 00:53:40.963471 2368 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:53:40.963696 kubelet[2368]: I0813 00:53:40.963682 2368 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:53:40.963799 kubelet[2368]: I0813 00:53:40.963787 2368 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:53:40.963886 kubelet[2368]: I0813 00:53:40.963876 2368 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:53:40.964013 kubelet[2368]: I0813 00:53:40.964001 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:53:40.982830 kubelet[2368]: I0813 00:53:40.978068 2368 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:53:40.982830 kubelet[2368]: I0813 00:53:40.978612 2368 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:53:40.982830 kubelet[2368]: I0813 00:53:40.979126 2368 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:53:40.982830 kubelet[2368]: I0813 00:53:40.979154 2368 server.go:1287] "Started kubelet" Aug 13 00:53:40.982830 kubelet[2368]: I0813 00:53:40.981530 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:53:40.989389 kubelet[2368]: I0813 00:53:40.988805 2368 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:53:40.990410 kubelet[2368]: I0813 00:53:40.990167 2368 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:53:40.991568 kubelet[2368]: I0813 00:53:40.991503 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:53:40.991568 kubelet[2368]: I0813 00:53:40.991721 2368 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:53:40.992132 kubelet[2368]: I0813 00:53:40.991987 2368 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:53:40.994115 kubelet[2368]: I0813 00:53:40.993888 2368 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:53:40.994721 kubelet[2368]: I0813 00:53:40.994703 2368 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:53:40.994956 kubelet[2368]: I0813 00:53:40.994922 2368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:53:40.996696 kubelet[2368]: I0813 00:53:40.996538 2368 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:53:40.996696 kubelet[2368]: I0813 00:53:40.996675 2368 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:53:40.999047 kubelet[2368]: I0813 00:53:40.998678 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:53:41.000519 kubelet[2368]: I0813 00:53:41.000031 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:53:41.000519 kubelet[2368]: I0813 00:53:41.000059 2368 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:53:41.000519 kubelet[2368]: I0813 00:53:41.000080 2368 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:53:41.000519 kubelet[2368]: I0813 00:53:41.000089 2368 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:53:41.000519 kubelet[2368]: E0813 00:53:41.000140 2368 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:53:41.001319 kubelet[2368]: I0813 00:53:41.001300 2368 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:53:41.027663 sudo[2397]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:53:41.028006 sudo[2397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076403 2368 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076425 2368 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076446 2368 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076676 2368 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076696 2368 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076720 2368 policy_none.go:49] "None policy: Start" Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076732 2368 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076744 2368 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:53:41.079375 kubelet[2368]: I0813 00:53:41.076908 2368 state_mem.go:75] "Updated machine memory state" Aug 13 00:53:41.084833 kubelet[2368]: I0813 00:53:41.081722 2368 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 
00:53:41.084833 kubelet[2368]: I0813 00:53:41.081895 2368 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:53:41.084833 kubelet[2368]: I0813 00:53:41.081916 2368 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:53:41.084833 kubelet[2368]: I0813 00:53:41.083447 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:53:41.088554 kubelet[2368]: E0813 00:53:41.088529 2368 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:53:41.104057 kubelet[2368]: I0813 00:53:41.104030 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108494 kubelet[2368]: I0813 00:53:41.108461 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108609 kubelet[2368]: I0813 00:53:41.108507 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108609 kubelet[2368]: I0813 00:53:41.108533 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54c70f937f4ee037adfaae29a3987a31-k8s-certs\") pod 
\"kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"54c70f937f4ee037adfaae29a3987a31\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108609 kubelet[2368]: I0813 00:53:41.108553 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108609 kubelet[2368]: I0813 00:53:41.108577 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108609 kubelet[2368]: I0813 00:53:41.108596 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03237e9ef5cd3a6cd69672d5b1531cdf-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"03237e9ef5cd3a6cd69672d5b1531cdf\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108810 kubelet[2368]: I0813 00:53:41.108615 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54c70f937f4ee037adfaae29a3987a31-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"54c70f937f4ee037adfaae29a3987a31\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108810 kubelet[2368]: I0813 00:53:41.108636 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54c70f937f4ee037adfaae29a3987a31-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"54c70f937f4ee037adfaae29a3987a31\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108810 kubelet[2368]: I0813 00:53:41.108660 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef4b33e87cb3ea80c95e2b747ef885c4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8\" (UID: \"ef4b33e87cb3ea80c95e2b747ef885c4\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.108953 kubelet[2368]: I0813 00:53:41.108870 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.113503 kubelet[2368]: I0813 00:53:41.113384 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.116414 kubelet[2368]: W0813 00:53:41.116390 2368 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:53:41.118225 kubelet[2368]: W0813 00:53:41.118206 2368 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:53:41.120642 kubelet[2368]: W0813 00:53:41.120622 2368 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:53:41.219452 kubelet[2368]: I0813 00:53:41.219364 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.232620 kubelet[2368]: I0813 00:53:41.232592 2368 
kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.232846 kubelet[2368]: I0813 00:53:41.232835 2368 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:41.628525 sudo[2397]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:41.970419 kubelet[2368]: I0813 00:53:41.970331 2368 apiserver.go:52] "Watching apiserver" Aug 13 00:53:41.996664 kubelet[2368]: I0813 00:53:41.996633 2368 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:53:42.038261 kubelet[2368]: I0813 00:53:42.038233 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:42.049383 kubelet[2368]: W0813 00:53:42.049358 2368 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:53:42.049584 kubelet[2368]: E0813 00:53:42.049565 2368 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" Aug 13 00:53:42.073179 kubelet[2368]: I0813 00:53:42.073039 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-a-4e9ab5f8c8" podStartSLOduration=1.073019361 podStartE2EDuration="1.073019361s" podCreationTimestamp="2025-08-13 00:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:42.061550488 +0000 UTC m=+1.676517754" watchObservedRunningTime="2025-08-13 00:53:42.073019361 +0000 UTC m=+1.687986627" Aug 13 00:53:42.083834 kubelet[2368]: I0813 00:53:42.083780 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510.3.8-a-4e9ab5f8c8" podStartSLOduration=1.08376351 podStartE2EDuration="1.08376351s" podCreationTimestamp="2025-08-13 00:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:42.073677482 +0000 UTC m=+1.688644748" watchObservedRunningTime="2025-08-13 00:53:42.08376351 +0000 UTC m=+1.698730676" Aug 13 00:53:42.094960 kubelet[2368]: I0813 00:53:42.094892 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-4e9ab5f8c8" podStartSLOduration=1.094873671 podStartE2EDuration="1.094873671s" podCreationTimestamp="2025-08-13 00:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:42.085763275 +0000 UTC m=+1.700730441" watchObservedRunningTime="2025-08-13 00:53:42.094873671 +0000 UTC m=+1.709840837" Aug 13 00:53:43.050043 sudo[1716]: pam_unix(sudo:session): session closed for user root Aug 13 00:53:43.151875 sshd[1713]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:43.155419 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:53:43.155686 systemd[1]: sshd@4-10.200.4.32:22-10.200.16.10:59076.service: Deactivated successfully. Aug 13 00:53:43.156787 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:53:43.157040 systemd[1]: session-7.scope: Consumed 4.067s CPU time. Aug 13 00:53:43.157992 systemd-logind[1423]: Removed session 7. Aug 13 00:53:45.991780 kubelet[2368]: I0813 00:53:45.991731 2368 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:53:45.992341 env[1437]: time="2025-08-13T00:53:45.992117695Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:53:45.992725 kubelet[2368]: I0813 00:53:45.992700 2368 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:53:46.787669 systemd[1]: Created slice kubepods-besteffort-pod85c251f4_1f5b_4b5e_9ab3_6514890f4401.slice. Aug 13 00:53:46.803458 systemd[1]: Created slice kubepods-burstable-podf9c77148_164b_49db_a560_04f26bdb3fb5.slice. Aug 13 00:53:46.848776 kubelet[2368]: I0813 00:53:46.848725 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrxg\" (UniqueName: \"kubernetes.io/projected/85c251f4-1f5b-4b5e-9ab3-6514890f4401-kube-api-access-fhrxg\") pod \"kube-proxy-vg8dt\" (UID: \"85c251f4-1f5b-4b5e-9ab3-6514890f4401\") " pod="kube-system/kube-proxy-vg8dt" Aug 13 00:53:46.848971 kubelet[2368]: I0813 00:53:46.848785 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-net\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.848971 kubelet[2368]: I0813 00:53:46.848812 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxmqg\" (UniqueName: \"kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-kube-api-access-vxmqg\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.848971 kubelet[2368]: I0813 00:53:46.848832 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-hubble-tls\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.848971 kubelet[2368]: I0813 00:53:46.848856 2368 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-lib-modules\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.848971 kubelet[2368]: I0813 00:53:46.848880 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-run\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.848971 kubelet[2368]: I0813 00:53:46.848899 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9c77148-164b-49db-a560-04f26bdb3fb5-clustermesh-secrets\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849246 kubelet[2368]: I0813 00:53:46.848951 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85c251f4-1f5b-4b5e-9ab3-6514890f4401-lib-modules\") pod \"kube-proxy-vg8dt\" (UID: \"85c251f4-1f5b-4b5e-9ab3-6514890f4401\") " pod="kube-system/kube-proxy-vg8dt" Aug 13 00:53:46.849246 kubelet[2368]: I0813 00:53:46.848972 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-config-path\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849246 kubelet[2368]: I0813 00:53:46.848994 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/85c251f4-1f5b-4b5e-9ab3-6514890f4401-kube-proxy\") pod \"kube-proxy-vg8dt\" (UID: \"85c251f4-1f5b-4b5e-9ab3-6514890f4401\") " pod="kube-system/kube-proxy-vg8dt" Aug 13 00:53:46.849246 kubelet[2368]: I0813 00:53:46.849015 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-bpf-maps\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849246 kubelet[2368]: I0813 00:53:46.849042 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-hostproc\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849246 kubelet[2368]: I0813 00:53:46.849065 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-kernel\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849489 kubelet[2368]: I0813 00:53:46.849093 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-xtables-lock\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849489 kubelet[2368]: I0813 00:53:46.849118 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85c251f4-1f5b-4b5e-9ab3-6514890f4401-xtables-lock\") pod \"kube-proxy-vg8dt\" (UID: 
\"85c251f4-1f5b-4b5e-9ab3-6514890f4401\") " pod="kube-system/kube-proxy-vg8dt" Aug 13 00:53:46.849489 kubelet[2368]: I0813 00:53:46.849139 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-cgroup\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849489 kubelet[2368]: I0813 00:53:46.849161 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cni-path\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.849489 kubelet[2368]: I0813 00:53:46.849187 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-etc-cni-netd\") pod \"cilium-rxbc6\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " pod="kube-system/cilium-rxbc6" Aug 13 00:53:46.969369 kubelet[2368]: I0813 00:53:46.969328 2368 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:53:47.058190 systemd[1]: Created slice kubepods-besteffort-pod2be47590_73e6_4560_b4e0_3dcb1e538eee.slice. 
Aug 13 00:53:47.097702 env[1437]: time="2025-08-13T00:53:47.097660899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg8dt,Uid:85c251f4-1f5b-4b5e-9ab3-6514890f4401,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:47.107416 env[1437]: time="2025-08-13T00:53:47.107375677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxbc6,Uid:f9c77148-164b-49db-a560-04f26bdb3fb5,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:47.145389 env[1437]: time="2025-08-13T00:53:47.145302663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:47.145389 env[1437]: time="2025-08-13T00:53:47.145369365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:47.145920 env[1437]: time="2025-08-13T00:53:47.145717075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:47.146198 env[1437]: time="2025-08-13T00:53:47.146155088Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d2170be7719763a65ac70821f9ebec9e378382b9044736d1264bbd3daad8b69 pid=2451 runtime=io.containerd.runc.v2 Aug 13 00:53:47.151313 kubelet[2368]: I0813 00:53:47.151198 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg9s4\" (UniqueName: \"kubernetes.io/projected/2be47590-73e6-4560-b4e0-3dcb1e538eee-kube-api-access-bg9s4\") pod \"cilium-operator-6c4d7847fc-jxt85\" (UID: \"2be47590-73e6-4560-b4e0-3dcb1e538eee\") " pod="kube-system/cilium-operator-6c4d7847fc-jxt85" Aug 13 00:53:47.151313 kubelet[2368]: I0813 00:53:47.151248 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2be47590-73e6-4560-b4e0-3dcb1e538eee-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jxt85\" (UID: \"2be47590-73e6-4560-b4e0-3dcb1e538eee\") " pod="kube-system/cilium-operator-6c4d7847fc-jxt85" Aug 13 00:53:47.156819 env[1437]: time="2025-08-13T00:53:47.156624487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:47.156819 env[1437]: time="2025-08-13T00:53:47.156669389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:47.156819 env[1437]: time="2025-08-13T00:53:47.156694689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:47.157104 env[1437]: time="2025-08-13T00:53:47.156854694Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d pid=2470 runtime=io.containerd.runc.v2 Aug 13 00:53:47.166526 systemd[1]: Started cri-containerd-1d2170be7719763a65ac70821f9ebec9e378382b9044736d1264bbd3daad8b69.scope. Aug 13 00:53:47.179049 systemd[1]: Started cri-containerd-430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d.scope. 
Aug 13 00:53:47.221192 env[1437]: time="2025-08-13T00:53:47.221149435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vg8dt,Uid:85c251f4-1f5b-4b5e-9ab3-6514890f4401,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d2170be7719763a65ac70821f9ebec9e378382b9044736d1264bbd3daad8b69\"" Aug 13 00:53:47.224582 env[1437]: time="2025-08-13T00:53:47.224424029Z" level=info msg="CreateContainer within sandbox \"1d2170be7719763a65ac70821f9ebec9e378382b9044736d1264bbd3daad8b69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:53:47.231590 env[1437]: time="2025-08-13T00:53:47.231541733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxbc6,Uid:f9c77148-164b-49db-a560-04f26bdb3fb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\"" Aug 13 00:53:47.233087 env[1437]: time="2025-08-13T00:53:47.233055276Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:53:47.271932 env[1437]: time="2025-08-13T00:53:47.271883488Z" level=info msg="CreateContainer within sandbox \"1d2170be7719763a65ac70821f9ebec9e378382b9044736d1264bbd3daad8b69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96ebb9c2c9370947e7bf54c0ebbde9c5c1954f9b485dacd4d0bf7ca18bb1f7bc\"" Aug 13 00:53:47.272485 env[1437]: time="2025-08-13T00:53:47.272452705Z" level=info msg="StartContainer for \"96ebb9c2c9370947e7bf54c0ebbde9c5c1954f9b485dacd4d0bf7ca18bb1f7bc\"" Aug 13 00:53:47.290170 systemd[1]: Started cri-containerd-96ebb9c2c9370947e7bf54c0ebbde9c5c1954f9b485dacd4d0bf7ca18bb1f7bc.scope. 
Aug 13 00:53:47.323112 env[1437]: time="2025-08-13T00:53:47.323026953Z" level=info msg="StartContainer for \"96ebb9c2c9370947e7bf54c0ebbde9c5c1954f9b485dacd4d0bf7ca18bb1f7bc\" returns successfully" Aug 13 00:53:47.363790 env[1437]: time="2025-08-13T00:53:47.363745719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jxt85,Uid:2be47590-73e6-4560-b4e0-3dcb1e538eee,Namespace:kube-system,Attempt:0,}" Aug 13 00:53:47.397352 env[1437]: time="2025-08-13T00:53:47.397191977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:47.397352 env[1437]: time="2025-08-13T00:53:47.397223278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:47.397352 env[1437]: time="2025-08-13T00:53:47.397232678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:47.397646 env[1437]: time="2025-08-13T00:53:47.397461785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc pid=2576 runtime=io.containerd.runc.v2 Aug 13 00:53:47.412608 systemd[1]: Started cri-containerd-928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc.scope. 
Aug 13 00:53:47.475273 env[1437]: time="2025-08-13T00:53:47.475211512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jxt85,Uid:2be47590-73e6-4560-b4e0-3dcb1e538eee,Namespace:kube-system,Attempt:0,} returns sandbox id \"928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc\"" Aug 13 00:53:48.065037 kubelet[2368]: I0813 00:53:48.064976 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vg8dt" podStartSLOduration=2.064956457 podStartE2EDuration="2.064956457s" podCreationTimestamp="2025-08-13 00:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:48.064817353 +0000 UTC m=+7.679784619" watchObservedRunningTime="2025-08-13 00:53:48.064956457 +0000 UTC m=+7.679923723" Aug 13 00:53:53.313538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457408987.mount: Deactivated successfully. 
Aug 13 00:53:56.348818 env[1437]: time="2025-08-13T00:53:56.348765947Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:56.354224 env[1437]: time="2025-08-13T00:53:56.354179672Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:56.357893 env[1437]: time="2025-08-13T00:53:56.357857957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:56.358530 env[1437]: time="2025-08-13T00:53:56.358486171Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:53:56.360908 env[1437]: time="2025-08-13T00:53:56.360869226Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:53:56.363223 env[1437]: time="2025-08-13T00:53:56.363186279Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:53:56.395179 env[1437]: time="2025-08-13T00:53:56.395127815Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\"" Aug 13 
00:53:56.395748 env[1437]: time="2025-08-13T00:53:56.395627826Z" level=info msg="StartContainer for \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\"" Aug 13 00:53:56.425859 systemd[1]: Started cri-containerd-9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3.scope. Aug 13 00:53:56.455608 env[1437]: time="2025-08-13T00:53:56.455555606Z" level=info msg="StartContainer for \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\" returns successfully" Aug 13 00:53:56.462869 systemd[1]: cri-containerd-9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3.scope: Deactivated successfully. Aug 13 00:53:57.384550 systemd[1]: run-containerd-runc-k8s.io-9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3-runc.n4W0Db.mount: Deactivated successfully. Aug 13 00:53:57.384688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3-rootfs.mount: Deactivated successfully. Aug 13 00:54:00.181732 env[1437]: time="2025-08-13T00:54:00.181662316Z" level=info msg="shim disconnected" id=9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3 Aug 13 00:54:00.181732 env[1437]: time="2025-08-13T00:54:00.181725418Z" level=warning msg="cleaning up after shim disconnected" id=9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3 namespace=k8s.io Aug 13 00:54:00.181732 env[1437]: time="2025-08-13T00:54:00.181737118Z" level=info msg="cleaning up dead shim" Aug 13 00:54:00.189342 env[1437]: time="2025-08-13T00:54:00.189302277Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2781 runtime=io.containerd.runc.v2\n" Aug 13 00:54:00.869892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871649377.mount: Deactivated successfully. 
Aug 13 00:54:01.101387 env[1437]: time="2025-08-13T00:54:01.101335683Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:54:01.133699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241347521.mount: Deactivated successfully. Aug 13 00:54:01.140278 env[1437]: time="2025-08-13T00:54:01.140237282Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\"" Aug 13 00:54:01.142962 env[1437]: time="2025-08-13T00:54:01.141052899Z" level=info msg="StartContainer for \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\"" Aug 13 00:54:01.217106 systemd[1]: Started cri-containerd-c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513.scope. Aug 13 00:54:01.273765 env[1437]: time="2025-08-13T00:54:01.273634721Z" level=info msg="StartContainer for \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\" returns successfully" Aug 13 00:54:01.287963 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:54:01.288255 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:54:01.288433 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:54:01.294156 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:54:01.302406 systemd[1]: cri-containerd-c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513.scope: Deactivated successfully. Aug 13 00:54:01.307499 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 00:54:01.462008 env[1437]: time="2025-08-13T00:54:01.460929767Z" level=info msg="shim disconnected" id=c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513
Aug 13 00:54:01.462344 env[1437]: time="2025-08-13T00:54:01.462301495Z" level=warning msg="cleaning up after shim disconnected" id=c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513 namespace=k8s.io
Aug 13 00:54:01.462451 env[1437]: time="2025-08-13T00:54:01.462436298Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:01.496196 env[1437]: time="2025-08-13T00:54:01.496146990Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2846 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:01.880082 env[1437]: time="2025-08-13T00:54:01.880028972Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:01.890809 env[1437]: time="2025-08-13T00:54:01.890718192Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:01.897181 env[1437]: time="2025-08-13T00:54:01.897139723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:01.897720 env[1437]: time="2025-08-13T00:54:01.897682235Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:54:01.901178 env[1437]: time="2025-08-13T00:54:01.901137906Z" level=info msg="CreateContainer within sandbox \"928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:54:01.926289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502137161.mount: Deactivated successfully.
Aug 13 00:54:01.936354 env[1437]: time="2025-08-13T00:54:01.936308328Z" level=info msg="CreateContainer within sandbox \"928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\""
Aug 13 00:54:01.937215 env[1437]: time="2025-08-13T00:54:01.937184546Z" level=info msg="StartContainer for \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\""
Aug 13 00:54:01.955765 systemd[1]: Started cri-containerd-36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06.scope.
Aug 13 00:54:01.989245 env[1437]: time="2025-08-13T00:54:01.989191914Z" level=info msg="StartContainer for \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\" returns successfully"
Aug 13 00:54:02.097213 env[1437]: time="2025-08-13T00:54:02.097091686Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:54:02.150204 env[1437]: time="2025-08-13T00:54:02.150086250Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\""
Aug 13 00:54:02.151213 env[1437]: time="2025-08-13T00:54:02.151182872Z" level=info msg="StartContainer for \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\""
Aug 13 00:54:02.191513 systemd[1]: Started cri-containerd-58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2.scope.
Aug 13 00:54:02.192716 kubelet[2368]: I0813 00:54:02.191613 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jxt85" podStartSLOduration=0.770569409 podStartE2EDuration="15.191589183s" podCreationTimestamp="2025-08-13 00:53:47 +0000 UTC" firstStartedPulling="2025-08-13 00:53:47.477557679 +0000 UTC m=+7.092524845" lastFinishedPulling="2025-08-13 00:54:01.898577353 +0000 UTC m=+21.513544619" observedRunningTime="2025-08-13 00:54:02.116773481 +0000 UTC m=+21.731740747" watchObservedRunningTime="2025-08-13 00:54:02.191589183 +0000 UTC m=+21.806556349"
Aug 13 00:54:02.269545 env[1437]: time="2025-08-13T00:54:02.269485047Z" level=info msg="StartContainer for \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\" returns successfully"
Aug 13 00:54:02.287679 systemd[1]: cri-containerd-58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2.scope: Deactivated successfully.
Aug 13 00:54:02.619075 env[1437]: time="2025-08-13T00:54:02.619020766Z" level=info msg="shim disconnected" id=58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2
Aug 13 00:54:02.619655 env[1437]: time="2025-08-13T00:54:02.619625878Z" level=warning msg="cleaning up after shim disconnected" id=58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2 namespace=k8s.io
Aug 13 00:54:02.619765 env[1437]: time="2025-08-13T00:54:02.619746681Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:02.635061 env[1437]: time="2025-08-13T00:54:02.635010687Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2942 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:03.101325 env[1437]: time="2025-08-13T00:54:03.101281006Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:54:03.134954 env[1437]: time="2025-08-13T00:54:03.134908767Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\""
Aug 13 00:54:03.135579 env[1437]: time="2025-08-13T00:54:03.135541179Z" level=info msg="StartContainer for \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\""
Aug 13 00:54:03.162073 systemd[1]: Started cri-containerd-997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396.scope.
Aug 13 00:54:03.191892 systemd[1]: cri-containerd-997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396.scope: Deactivated successfully.
Aug 13 00:54:03.193768 env[1437]: time="2025-08-13T00:54:03.193688521Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9c77148_164b_49db_a560_04f26bdb3fb5.slice/cri-containerd-997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396.scope/memory.events\": no such file or directory"
Aug 13 00:54:03.197581 env[1437]: time="2025-08-13T00:54:03.197545797Z" level=info msg="StartContainer for \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\" returns successfully"
Aug 13 00:54:03.223307 env[1437]: time="2025-08-13T00:54:03.223257602Z" level=info msg="shim disconnected" id=997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396
Aug 13 00:54:03.223307 env[1437]: time="2025-08-13T00:54:03.223307603Z" level=warning msg="cleaning up after shim disconnected" id=997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396 namespace=k8s.io
Aug 13 00:54:03.223588 env[1437]: time="2025-08-13T00:54:03.223318503Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:03.230580 env[1437]: time="2025-08-13T00:54:03.230541145Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2996 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:03.862104 systemd[1]: run-containerd-runc-k8s.io-997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396-runc.mr8xvY.mount: Deactivated successfully.
Aug 13 00:54:03.862255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396-rootfs.mount: Deactivated successfully.
Aug 13 00:54:04.108264 env[1437]: time="2025-08-13T00:54:04.108214039Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:54:04.140421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450674369.mount: Deactivated successfully.
Aug 13 00:54:04.152840 env[1437]: time="2025-08-13T00:54:04.152796396Z" level=info msg="CreateContainer within sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\""
Aug 13 00:54:04.153718 env[1437]: time="2025-08-13T00:54:04.153687513Z" level=info msg="StartContainer for \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\""
Aug 13 00:54:04.195549 systemd[1]: Started cri-containerd-56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549.scope.
Aug 13 00:54:04.268309 env[1437]: time="2025-08-13T00:54:04.268253615Z" level=info msg="StartContainer for \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\" returns successfully"
Aug 13 00:54:04.396406 kubelet[2368]: I0813 00:54:04.395301 2368 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 00:54:04.448333 systemd[1]: Created slice kubepods-burstable-pod077d314d_b05c_4baf_945b_acd0fbcc400b.slice.
Aug 13 00:54:04.455800 systemd[1]: Created slice kubepods-burstable-poda909486a_5460_4bab_96ab_cbf67094b754.slice.
Aug 13 00:54:04.479479 kubelet[2368]: I0813 00:54:04.479443 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bmj\" (UniqueName: \"kubernetes.io/projected/077d314d-b05c-4baf-945b-acd0fbcc400b-kube-api-access-s8bmj\") pod \"coredns-668d6bf9bc-qrlnq\" (UID: \"077d314d-b05c-4baf-945b-acd0fbcc400b\") " pod="kube-system/coredns-668d6bf9bc-qrlnq"
Aug 13 00:54:04.479646 kubelet[2368]: I0813 00:54:04.479488 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077d314d-b05c-4baf-945b-acd0fbcc400b-config-volume\") pod \"coredns-668d6bf9bc-qrlnq\" (UID: \"077d314d-b05c-4baf-945b-acd0fbcc400b\") " pod="kube-system/coredns-668d6bf9bc-qrlnq"
Aug 13 00:54:04.479646 kubelet[2368]: I0813 00:54:04.479527 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a909486a-5460-4bab-96ab-cbf67094b754-config-volume\") pod \"coredns-668d6bf9bc-6w6f6\" (UID: \"a909486a-5460-4bab-96ab-cbf67094b754\") " pod="kube-system/coredns-668d6bf9bc-6w6f6"
Aug 13 00:54:04.479646 kubelet[2368]: I0813 00:54:04.479555 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcc9g\" (UniqueName: \"kubernetes.io/projected/a909486a-5460-4bab-96ab-cbf67094b754-kube-api-access-kcc9g\") pod \"coredns-668d6bf9bc-6w6f6\" (UID: \"a909486a-5460-4bab-96ab-cbf67094b754\") " pod="kube-system/coredns-668d6bf9bc-6w6f6"
Aug 13 00:54:04.754326 env[1437]: time="2025-08-13T00:54:04.754217254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrlnq,Uid:077d314d-b05c-4baf-945b-acd0fbcc400b,Namespace:kube-system,Attempt:0,}"
Aug 13 00:54:04.759321 env[1437]: time="2025-08-13T00:54:04.759285551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6w6f6,Uid:a909486a-5460-4bab-96ab-cbf67094b754,Namespace:kube-system,Attempt:0,}"
Aug 13 00:54:06.774965 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Aug 13 00:54:06.775673 systemd-networkd[1594]: cilium_host: Link UP
Aug 13 00:54:06.775846 systemd-networkd[1594]: cilium_net: Link UP
Aug 13 00:54:06.775851 systemd-networkd[1594]: cilium_net: Gained carrier
Aug 13 00:54:06.778571 systemd-networkd[1594]: cilium_host: Gained carrier
Aug 13 00:54:06.778794 systemd-networkd[1594]: cilium_host: Gained IPv6LL
Aug 13 00:54:07.022672 systemd-networkd[1594]: cilium_vxlan: Link UP
Aug 13 00:54:07.022682 systemd-networkd[1594]: cilium_vxlan: Gained carrier
Aug 13 00:54:07.291975 kernel: NET: Registered PF_ALG protocol family
Aug 13 00:54:07.458139 systemd-networkd[1594]: cilium_net: Gained IPv6LL
Aug 13 00:54:08.172477 systemd-networkd[1594]: lxc_health: Link UP
Aug 13 00:54:08.204341 systemd-networkd[1594]: lxc_health: Gained carrier
Aug 13 00:54:08.204968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:54:08.828545 systemd-networkd[1594]: lxc77524712f18d: Link UP
Aug 13 00:54:08.842005 kernel: eth0: renamed from tmpcc331
Aug 13 00:54:08.850056 systemd-networkd[1594]: lxca97f81350f2a: Link UP
Aug 13 00:54:08.859537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc77524712f18d: link becomes ready
Aug 13 00:54:08.859202 systemd-networkd[1594]: lxc77524712f18d: Gained carrier
Aug 13 00:54:08.866213 kernel: eth0: renamed from tmpff899
Aug 13 00:54:08.866344 systemd-networkd[1594]: cilium_vxlan: Gained IPv6LL
Aug 13 00:54:08.875129 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca97f81350f2a: link becomes ready
Aug 13 00:54:08.874917 systemd-networkd[1594]: lxca97f81350f2a: Gained carrier
Aug 13 00:54:09.145251 kubelet[2368]: I0813 00:54:09.145114 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rxbc6" podStartSLOduration=14.017542522 podStartE2EDuration="23.145092368s" podCreationTimestamp="2025-08-13 00:53:46 +0000 UTC" firstStartedPulling="2025-08-13 00:53:47.232654865 +0000 UTC m=+6.847622031" lastFinishedPulling="2025-08-13 00:53:56.360204611 +0000 UTC m=+15.975171877" observedRunningTime="2025-08-13 00:54:05.136983253 +0000 UTC m=+24.751950419" watchObservedRunningTime="2025-08-13 00:54:09.145092368 +0000 UTC m=+28.760059634"
Aug 13 00:54:09.315181 systemd-networkd[1594]: lxc_health: Gained IPv6LL
Aug 13 00:54:10.339125 systemd-networkd[1594]: lxc77524712f18d: Gained IPv6LL
Aug 13 00:54:10.594101 systemd-networkd[1594]: lxca97f81350f2a: Gained IPv6LL
Aug 13 00:54:12.393322 env[1437]: time="2025-08-13T00:54:12.393119166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:54:12.393322 env[1437]: time="2025-08-13T00:54:12.393161867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:54:12.393322 env[1437]: time="2025-08-13T00:54:12.393176667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:54:12.394007 env[1437]: time="2025-08-13T00:54:12.393962580Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc331ab2de864fceffbb8cd5556b70fbb5264c29ab0a4dfb1708d24df414d9ed pid=3545 runtime=io.containerd.runc.v2
Aug 13 00:54:12.452816 systemd[1]: Started cri-containerd-cc331ab2de864fceffbb8cd5556b70fbb5264c29ab0a4dfb1708d24df414d9ed.scope.
Aug 13 00:54:12.464660 env[1437]: time="2025-08-13T00:54:12.464584129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:54:12.464879 env[1437]: time="2025-08-13T00:54:12.464851033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:54:12.465047 env[1437]: time="2025-08-13T00:54:12.465019736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:54:12.466373 env[1437]: time="2025-08-13T00:54:12.466268156Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff8999bc3f64e4e5512b8361ee6cd7a41a1eef6787c0a5a9249ac89ce80263c2 pid=3573 runtime=io.containerd.runc.v2
Aug 13 00:54:12.503049 systemd[1]: Started cri-containerd-ff8999bc3f64e4e5512b8361ee6cd7a41a1eef6787c0a5a9249ac89ce80263c2.scope.
Aug 13 00:54:12.550563 env[1437]: time="2025-08-13T00:54:12.550519126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6w6f6,Uid:a909486a-5460-4bab-96ab-cbf67094b754,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc331ab2de864fceffbb8cd5556b70fbb5264c29ab0a4dfb1708d24df414d9ed\""
Aug 13 00:54:12.555408 env[1437]: time="2025-08-13T00:54:12.555370405Z" level=info msg="CreateContainer within sandbox \"cc331ab2de864fceffbb8cd5556b70fbb5264c29ab0a4dfb1708d24df414d9ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:54:12.592971 env[1437]: time="2025-08-13T00:54:12.591518993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrlnq,Uid:077d314d-b05c-4baf-945b-acd0fbcc400b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff8999bc3f64e4e5512b8361ee6cd7a41a1eef6787c0a5a9249ac89ce80263c2\""
Aug 13 00:54:12.594873 env[1437]: time="2025-08-13T00:54:12.594831747Z" level=info msg="CreateContainer within sandbox \"ff8999bc3f64e4e5512b8361ee6cd7a41a1eef6787c0a5a9249ac89ce80263c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:54:12.623359 env[1437]: time="2025-08-13T00:54:12.623309910Z" level=info msg="CreateContainer within sandbox \"cc331ab2de864fceffbb8cd5556b70fbb5264c29ab0a4dfb1708d24df414d9ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fe6ab804f89dd927a0d06476c578b1abb71fbbb006fa3f6df76610075a085c4\""
Aug 13 00:54:12.624702 env[1437]: time="2025-08-13T00:54:12.624667932Z" level=info msg="StartContainer for \"1fe6ab804f89dd927a0d06476c578b1abb71fbbb006fa3f6df76610075a085c4\""
Aug 13 00:54:12.642416 systemd[1]: Started cri-containerd-1fe6ab804f89dd927a0d06476c578b1abb71fbbb006fa3f6df76610075a085c4.scope.
Aug 13 00:54:12.648163 env[1437]: time="2025-08-13T00:54:12.648063813Z" level=info msg="CreateContainer within sandbox \"ff8999bc3f64e4e5512b8361ee6cd7a41a1eef6787c0a5a9249ac89ce80263c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a74a79de495d02e2d69c7b16f0169ce9cf4a6eedbbc0a4e0b98f229771c8f0d\""
Aug 13 00:54:12.649091 env[1437]: time="2025-08-13T00:54:12.648880026Z" level=info msg="StartContainer for \"9a74a79de495d02e2d69c7b16f0169ce9cf4a6eedbbc0a4e0b98f229771c8f0d\""
Aug 13 00:54:12.677457 systemd[1]: Started cri-containerd-9a74a79de495d02e2d69c7b16f0169ce9cf4a6eedbbc0a4e0b98f229771c8f0d.scope.
Aug 13 00:54:12.698884 env[1437]: time="2025-08-13T00:54:12.698820238Z" level=info msg="StartContainer for \"1fe6ab804f89dd927a0d06476c578b1abb71fbbb006fa3f6df76610075a085c4\" returns successfully"
Aug 13 00:54:12.714263 env[1437]: time="2025-08-13T00:54:12.714210488Z" level=info msg="StartContainer for \"9a74a79de495d02e2d69c7b16f0169ce9cf4a6eedbbc0a4e0b98f229771c8f0d\" returns successfully"
Aug 13 00:54:13.141065 kubelet[2368]: I0813 00:54:13.140985 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6w6f6" podStartSLOduration=26.140966085 podStartE2EDuration="26.140966085s" podCreationTimestamp="2025-08-13 00:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:13.139254357 +0000 UTC m=+32.754221523" watchObservedRunningTime="2025-08-13 00:54:13.140966085 +0000 UTC m=+32.755933251"
Aug 13 00:54:13.402665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026590362.mount: Deactivated successfully.
Aug 13 00:55:46.089053 systemd[1]: Started sshd@5-10.200.4.32:22-10.200.16.10:57038.service.
Aug 13 00:55:46.677018 sshd[3715]: Accepted publickey for core from 10.200.16.10 port 57038 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:55:46.678853 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:46.684202 systemd[1]: Started session-8.scope.
Aug 13 00:55:46.684649 systemd-logind[1423]: New session 8 of user core.
Aug 13 00:55:47.245670 sshd[3715]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:47.248530 systemd[1]: sshd@5-10.200.4.32:22-10.200.16.10:57038.service: Deactivated successfully.
Aug 13 00:55:47.249481 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:55:47.250250 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:55:47.251093 systemd-logind[1423]: Removed session 8.
Aug 13 00:55:52.345383 systemd[1]: Started sshd@6-10.200.4.32:22-10.200.16.10:56788.service.
Aug 13 00:55:52.933337 sshd[3730]: Accepted publickey for core from 10.200.16.10 port 56788 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:55:52.935045 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:52.940654 systemd[1]: Started session-9.scope.
Aug 13 00:55:52.941443 systemd-logind[1423]: New session 9 of user core.
Aug 13 00:55:53.412922 sshd[3730]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:53.416297 systemd[1]: sshd@6-10.200.4.32:22-10.200.16.10:56788.service: Deactivated successfully.
Aug 13 00:55:53.417424 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:55:53.418459 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:55:53.419424 systemd-logind[1423]: Removed session 9.
Aug 13 00:55:58.513441 systemd[1]: Started sshd@7-10.200.4.32:22-10.200.16.10:56800.service.
Aug 13 00:55:59.106619 sshd[3742]: Accepted publickey for core from 10.200.16.10 port 56800 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:55:59.108134 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:59.113008 systemd-logind[1423]: New session 10 of user core.
Aug 13 00:55:59.113287 systemd[1]: Started session-10.scope.
Aug 13 00:55:59.585233 sshd[3742]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:59.588769 systemd[1]: sshd@7-10.200.4.32:22-10.200.16.10:56800.service: Deactivated successfully.
Aug 13 00:55:59.589628 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:55:59.590127 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:55:59.590871 systemd-logind[1423]: Removed session 10.
Aug 13 00:56:04.685569 systemd[1]: Started sshd@8-10.200.4.32:22-10.200.16.10:55936.service.
Aug 13 00:56:05.277724 sshd[3755]: Accepted publickey for core from 10.200.16.10 port 55936 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:05.279188 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:05.284060 systemd[1]: Started session-11.scope.
Aug 13 00:56:05.284661 systemd-logind[1423]: New session 11 of user core.
Aug 13 00:56:05.762469 sshd[3755]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:05.765581 systemd[1]: sshd@8-10.200.4.32:22-10.200.16.10:55936.service: Deactivated successfully.
Aug 13 00:56:05.766541 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:56:05.767205 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:56:05.768038 systemd-logind[1423]: Removed session 11.
Aug 13 00:56:10.862235 systemd[1]: Started sshd@9-10.200.4.32:22-10.200.16.10:59184.service.
Aug 13 00:56:11.452406 sshd[3767]: Accepted publickey for core from 10.200.16.10 port 59184 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:11.453858 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:11.458799 systemd[1]: Started session-12.scope.
Aug 13 00:56:11.459269 systemd-logind[1423]: New session 12 of user core.
Aug 13 00:56:11.938967 sshd[3767]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:11.942250 systemd[1]: sshd@9-10.200.4.32:22-10.200.16.10:59184.service: Deactivated successfully.
Aug 13 00:56:11.943661 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:56:11.943686 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:56:11.944758 systemd-logind[1423]: Removed session 12.
Aug 13 00:56:12.039321 systemd[1]: Started sshd@10-10.200.4.32:22-10.200.16.10:59196.service.
Aug 13 00:56:12.627696 sshd[3779]: Accepted publickey for core from 10.200.16.10 port 59196 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:12.629313 sshd[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:12.634403 systemd[1]: Started session-13.scope.
Aug 13 00:56:12.635054 systemd-logind[1423]: New session 13 of user core.
Aug 13 00:56:13.158074 sshd[3779]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:13.161393 systemd[1]: sshd@10-10.200.4.32:22-10.200.16.10:59196.service: Deactivated successfully.
Aug 13 00:56:13.162515 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:56:13.163353 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:56:13.164160 systemd-logind[1423]: Removed session 13.
Aug 13 00:56:13.257760 systemd[1]: Started sshd@11-10.200.4.32:22-10.200.16.10:59208.service.
Aug 13 00:56:13.844281 sshd[3789]: Accepted publickey for core from 10.200.16.10 port 59208 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:13.846085 sshd[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:13.852024 systemd-logind[1423]: New session 14 of user core.
Aug 13 00:56:13.852550 systemd[1]: Started session-14.scope.
Aug 13 00:56:14.335042 sshd[3789]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:14.338467 systemd[1]: sshd@11-10.200.4.32:22-10.200.16.10:59208.service: Deactivated successfully.
Aug 13 00:56:14.339618 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:56:14.340519 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:56:14.341429 systemd-logind[1423]: Removed session 14.
Aug 13 00:56:19.435422 systemd[1]: Started sshd@12-10.200.4.32:22-10.200.16.10:59220.service.
Aug 13 00:56:20.027985 sshd[3803]: Accepted publickey for core from 10.200.16.10 port 59220 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:20.029782 sshd[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:20.035383 systemd[1]: Started session-15.scope.
Aug 13 00:56:20.036026 systemd-logind[1423]: New session 15 of user core.
Aug 13 00:56:20.505374 sshd[3803]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:20.508394 systemd[1]: sshd@12-10.200.4.32:22-10.200.16.10:59220.service: Deactivated successfully.
Aug 13 00:56:20.509310 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:56:20.510088 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:56:20.510969 systemd-logind[1423]: Removed session 15.
Aug 13 00:56:25.613319 systemd[1]: Started sshd@13-10.200.4.32:22-10.200.16.10:50600.service.
Aug 13 00:56:26.205716 sshd[3814]: Accepted publickey for core from 10.200.16.10 port 50600 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:26.207438 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:26.211997 systemd-logind[1423]: New session 16 of user core.
Aug 13 00:56:26.213083 systemd[1]: Started session-16.scope.
Aug 13 00:56:26.685585 sshd[3814]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:26.689143 systemd[1]: sshd@13-10.200.4.32:22-10.200.16.10:50600.service: Deactivated successfully.
Aug 13 00:56:26.690045 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:56:26.690709 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:56:26.691561 systemd-logind[1423]: Removed session 16.
Aug 13 00:56:26.785861 systemd[1]: Started sshd@14-10.200.4.32:22-10.200.16.10:50608.service.
Aug 13 00:56:27.377413 sshd[3826]: Accepted publickey for core from 10.200.16.10 port 50608 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:27.378831 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:27.383730 systemd[1]: Started session-17.scope.
Aug 13 00:56:27.384397 systemd-logind[1423]: New session 17 of user core.
Aug 13 00:56:27.974298 sshd[3826]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:27.977685 systemd[1]: sshd@14-10.200.4.32:22-10.200.16.10:50608.service: Deactivated successfully.
Aug 13 00:56:27.978649 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:56:27.979434 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:56:27.980284 systemd-logind[1423]: Removed session 17.
Aug 13 00:56:28.074165 systemd[1]: Started sshd@15-10.200.4.32:22-10.200.16.10:50620.service.
Aug 13 00:56:28.665713 sshd[3836]: Accepted publickey for core from 10.200.16.10 port 50620 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:28.667322 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:28.672646 systemd-logind[1423]: New session 18 of user core.
Aug 13 00:56:28.673150 systemd[1]: Started session-18.scope.
Aug 13 00:56:29.607437 sshd[3836]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:29.610883 systemd[1]: sshd@15-10.200.4.32:22-10.200.16.10:50620.service: Deactivated successfully.
Aug 13 00:56:29.612043 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:56:29.613035 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:56:29.614031 systemd-logind[1423]: Removed session 18.
Aug 13 00:56:29.708226 systemd[1]: Started sshd@16-10.200.4.32:22-10.200.16.10:50636.service.
Aug 13 00:56:30.301724 sshd[3853]: Accepted publickey for core from 10.200.16.10 port 50636 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:30.303326 sshd[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:30.308000 systemd-logind[1423]: New session 19 of user core.
Aug 13 00:56:30.308826 systemd[1]: Started session-19.scope.
Aug 13 00:56:30.891440 sshd[3853]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:30.894528 systemd[1]: sshd@16-10.200.4.32:22-10.200.16.10:50636.service: Deactivated successfully.
Aug 13 00:56:30.895419 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:56:30.896498 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:56:30.898131 systemd-logind[1423]: Removed session 19.
Aug 13 00:56:30.991925 systemd[1]: Started sshd@17-10.200.4.32:22-10.200.16.10:36762.service.
Aug 13 00:56:31.585173 sshd[3862]: Accepted publickey for core from 10.200.16.10 port 36762 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:31.586741 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:31.591787 systemd[1]: Started session-20.scope.
Aug 13 00:56:31.592440 systemd-logind[1423]: New session 20 of user core.
Aug 13 00:56:32.062220 sshd[3862]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:32.067283 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:56:32.067831 systemd[1]: sshd@17-10.200.4.32:22-10.200.16.10:36762.service: Deactivated successfully.
Aug 13 00:56:32.068923 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:56:32.070369 systemd-logind[1423]: Removed session 20.
Aug 13 00:56:37.174455 systemd[1]: Started sshd@18-10.200.4.32:22-10.200.16.10:36776.service.
Aug 13 00:56:37.762260 sshd[3876]: Accepted publickey for core from 10.200.16.10 port 36776 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:37.763785 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:37.768782 systemd[1]: Started session-21.scope.
Aug 13 00:56:37.769260 systemd-logind[1423]: New session 21 of user core.
Aug 13 00:56:38.240119 sshd[3876]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:38.243607 systemd[1]: sshd@18-10.200.4.32:22-10.200.16.10:36776.service: Deactivated successfully.
Aug 13 00:56:38.244559 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:56:38.245235 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:56:38.246008 systemd-logind[1423]: Removed session 21.
Aug 13 00:56:43.339803 systemd[1]: Started sshd@19-10.200.4.32:22-10.200.16.10:38608.service.
Aug 13 00:56:43.927961 sshd[3890]: Accepted publickey for core from 10.200.16.10 port 38608 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:43.929716 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:43.934981 systemd[1]: Started session-22.scope.
Aug 13 00:56:43.935444 systemd-logind[1423]: New session 22 of user core.
Aug 13 00:56:44.413203 sshd[3890]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:44.416484 systemd[1]: sshd@19-10.200.4.32:22-10.200.16.10:38608.service: Deactivated successfully.
Aug 13 00:56:44.417713 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:56:44.418556 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:56:44.419548 systemd-logind[1423]: Removed session 22.
Aug 13 00:56:49.513982 systemd[1]: Started sshd@20-10.200.4.32:22-10.200.16.10:38614.service.
Aug 13 00:56:50.107381 sshd[3905]: Accepted publickey for core from 10.200.16.10 port 38614 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:50.109133 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:50.114565 systemd[1]: Started session-23.scope.
Aug 13 00:56:50.115197 systemd-logind[1423]: New session 23 of user core.
Aug 13 00:56:50.593616 sshd[3905]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:50.597096 systemd[1]: sshd@20-10.200.4.32:22-10.200.16.10:38614.service: Deactivated successfully.
Aug 13 00:56:50.598231 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:56:50.599127 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:56:50.600241 systemd-logind[1423]: Removed session 23.
Aug 13 00:56:50.693903 systemd[1]: Started sshd@21-10.200.4.32:22-10.200.16.10:40106.service.
Aug 13 00:56:51.285972 sshd[3917]: Accepted publickey for core from 10.200.16.10 port 40106 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:51.287851 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:51.293233 systemd[1]: Started session-24.scope.
Aug 13 00:56:51.293846 systemd-logind[1423]: New session 24 of user core.
Aug 13 00:56:52.889871 kubelet[2368]: I0813 00:56:52.889766 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qrlnq" podStartSLOduration=185.889699794 podStartE2EDuration="3m5.889699794s" podCreationTimestamp="2025-08-13 00:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:13.174731123 +0000 UTC m=+32.789698389" watchObservedRunningTime="2025-08-13 00:56:52.889699794 +0000 UTC m=+192.504666960"
Aug 13 00:56:52.923650 systemd[1]: run-containerd-runc-k8s.io-56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549-runc.4XFckQ.mount: Deactivated successfully.
Aug 13 00:56:52.952122 env[1437]: time="2025-08-13T00:56:52.952073331Z" level=info msg="StopContainer for \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\" with timeout 30 (s)"
Aug 13 00:56:52.953109 env[1437]: time="2025-08-13T00:56:52.953077953Z" level=info msg="Stop container \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\" with signal terminated"
Aug 13 00:56:52.953385 env[1437]: time="2025-08-13T00:56:52.953120354Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:56:52.964050 env[1437]: time="2025-08-13T00:56:52.964020788Z" level=info msg="StopContainer for \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\" with timeout 2 (s)"
Aug 13 00:56:52.964544 env[1437]: time="2025-08-13T00:56:52.964508998Z" level=info msg="Stop container \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\" with signal terminated"
Aug 13 00:56:52.971003 systemd[1]: cri-containerd-36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06.scope: Deactivated successfully.
Aug 13 00:56:52.975376 systemd-networkd[1594]: lxc_health: Link DOWN Aug 13 00:56:52.975386 systemd-networkd[1594]: lxc_health: Lost carrier Aug 13 00:56:53.000498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06-rootfs.mount: Deactivated successfully. Aug 13 00:56:53.005698 systemd[1]: cri-containerd-56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549.scope: Deactivated successfully. Aug 13 00:56:53.005981 systemd[1]: cri-containerd-56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549.scope: Consumed 6.824s CPU time. Aug 13 00:56:53.026985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549-rootfs.mount: Deactivated successfully. Aug 13 00:56:53.059312 env[1437]: time="2025-08-13T00:56:53.059262520Z" level=info msg="shim disconnected" id=36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06 Aug 13 00:56:53.059312 env[1437]: time="2025-08-13T00:56:53.059309921Z" level=warning msg="cleaning up after shim disconnected" id=36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06 namespace=k8s.io Aug 13 00:56:53.059588 env[1437]: time="2025-08-13T00:56:53.059323922Z" level=info msg="cleaning up dead shim" Aug 13 00:56:53.059588 env[1437]: time="2025-08-13T00:56:53.059547727Z" level=info msg="shim disconnected" id=56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549 Aug 13 00:56:53.059588 env[1437]: time="2025-08-13T00:56:53.059581227Z" level=warning msg="cleaning up after shim disconnected" id=56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549 namespace=k8s.io Aug 13 00:56:53.059724 env[1437]: time="2025-08-13T00:56:53.059591527Z" level=info msg="cleaning up dead shim" Aug 13 00:56:53.073873 env[1437]: time="2025-08-13T00:56:53.073823430Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:53Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=3989 runtime=io.containerd.runc.v2\n" Aug 13 00:56:53.075860 env[1437]: time="2025-08-13T00:56:53.075818073Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3990 runtime=io.containerd.runc.v2\n" Aug 13 00:56:53.080333 env[1437]: time="2025-08-13T00:56:53.080300468Z" level=info msg="StopContainer for \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\" returns successfully" Aug 13 00:56:53.081040 env[1437]: time="2025-08-13T00:56:53.081012983Z" level=info msg="StopPodSandbox for \"928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc\"" Aug 13 00:56:53.081215 env[1437]: time="2025-08-13T00:56:53.081079485Z" level=info msg="Container to stop \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:56:53.081357 env[1437]: time="2025-08-13T00:56:53.081329590Z" level=info msg="StopContainer for \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\" returns successfully" Aug 13 00:56:53.081960 env[1437]: time="2025-08-13T00:56:53.081920203Z" level=info msg="StopPodSandbox for \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\"" Aug 13 00:56:53.082176 env[1437]: time="2025-08-13T00:56:53.082149807Z" level=info msg="Container to stop \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:56:53.082287 env[1437]: time="2025-08-13T00:56:53.082263010Z" level=info msg="Container to stop \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:56:53.082359 env[1437]: time="2025-08-13T00:56:53.082284610Z" level=info msg="Container to stop \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Aug 13 00:56:53.082359 env[1437]: time="2025-08-13T00:56:53.082301111Z" level=info msg="Container to stop \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:56:53.082359 env[1437]: time="2025-08-13T00:56:53.082317111Z" level=info msg="Container to stop \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:56:53.089574 systemd[1]: cri-containerd-928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc.scope: Deactivated successfully. Aug 13 00:56:53.093760 systemd[1]: cri-containerd-430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d.scope: Deactivated successfully. Aug 13 00:56:53.128169 env[1437]: time="2025-08-13T00:56:53.128119986Z" level=info msg="shim disconnected" id=430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d Aug 13 00:56:53.128671 env[1437]: time="2025-08-13T00:56:53.128642897Z" level=warning msg="cleaning up after shim disconnected" id=430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d namespace=k8s.io Aug 13 00:56:53.128820 env[1437]: time="2025-08-13T00:56:53.128804800Z" level=info msg="cleaning up dead shim" Aug 13 00:56:53.129132 env[1437]: time="2025-08-13T00:56:53.128582396Z" level=info msg="shim disconnected" id=928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc Aug 13 00:56:53.129255 env[1437]: time="2025-08-13T00:56:53.129236809Z" level=warning msg="cleaning up after shim disconnected" id=928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc namespace=k8s.io Aug 13 00:56:53.129337 env[1437]: time="2025-08-13T00:56:53.129324611Z" level=info msg="cleaning up dead shim" Aug 13 00:56:53.142845 env[1437]: time="2025-08-13T00:56:53.142729397Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:53Z\" level=info msg=\"starting signal 
loop\" namespace=k8s.io pid=4056 runtime=io.containerd.runc.v2\n" Aug 13 00:56:53.144124 env[1437]: time="2025-08-13T00:56:53.144091426Z" level=info msg="TearDown network for sandbox \"928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc\" successfully" Aug 13 00:56:53.144261 env[1437]: time="2025-08-13T00:56:53.144239029Z" level=info msg="StopPodSandbox for \"928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc\" returns successfully" Aug 13 00:56:53.145102 env[1437]: time="2025-08-13T00:56:53.144923143Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4055 runtime=io.containerd.runc.v2\n" Aug 13 00:56:53.145347 env[1437]: time="2025-08-13T00:56:53.145297351Z" level=info msg="TearDown network for sandbox \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" successfully" Aug 13 00:56:53.145347 env[1437]: time="2025-08-13T00:56:53.145325952Z" level=info msg="StopPodSandbox for \"430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d\" returns successfully" Aug 13 00:56:53.242845 kubelet[2368]: I0813 00:56:53.242791 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2be47590-73e6-4560-b4e0-3dcb1e538eee-cilium-config-path\") pod \"2be47590-73e6-4560-b4e0-3dcb1e538eee\" (UID: \"2be47590-73e6-4560-b4e0-3dcb1e538eee\") " Aug 13 00:56:53.244249 kubelet[2368]: I0813 00:56:53.244212 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-hubble-tls\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.244458 kubelet[2368]: I0813 00:56:53.244440 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-run\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.244579 kubelet[2368]: I0813 00:56:53.244562 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-net\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.244690 kubelet[2368]: I0813 00:56:53.244674 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-lib-modules\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.244801 kubelet[2368]: I0813 00:56:53.244785 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-cgroup\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.244992 kubelet[2368]: I0813 00:56:53.244967 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.246081 kubelet[2368]: I0813 00:56:53.246052 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.246333 kubelet[2368]: I0813 00:56:53.246308 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.246493 kubelet[2368]: I0813 00:56:53.246019 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.246802 kubelet[2368]: I0813 00:56:53.246771 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2be47590-73e6-4560-b4e0-3dcb1e538eee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2be47590-73e6-4560-b4e0-3dcb1e538eee" (UID: "2be47590-73e6-4560-b4e0-3dcb1e538eee"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:56:53.250273 kubelet[2368]: I0813 00:56:53.250237 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:56:53.345769 kubelet[2368]: I0813 00:56:53.345717 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-xtables-lock\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.345769 kubelet[2368]: I0813 00:56:53.345778 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxmqg\" (UniqueName: \"kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-kube-api-access-vxmqg\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346133 kubelet[2368]: I0813 00:56:53.345810 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-etc-cni-netd\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346133 kubelet[2368]: I0813 00:56:53.345840 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9c77148-164b-49db-a560-04f26bdb3fb5-clustermesh-secrets\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346133 kubelet[2368]: I0813 00:56:53.345865 2368 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bg9s4\" (UniqueName: \"kubernetes.io/projected/2be47590-73e6-4560-b4e0-3dcb1e538eee-kube-api-access-bg9s4\") pod \"2be47590-73e6-4560-b4e0-3dcb1e538eee\" (UID: \"2be47590-73e6-4560-b4e0-3dcb1e538eee\") " Aug 13 00:56:53.346133 kubelet[2368]: I0813 00:56:53.345893 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-config-path\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346133 kubelet[2368]: I0813 00:56:53.345915 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-hostproc\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346430 kubelet[2368]: I0813 00:56:53.346408 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-kernel\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346525 kubelet[2368]: I0813 00:56:53.346451 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cni-path\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: \"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346525 kubelet[2368]: I0813 00:56:53.346484 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-bpf-maps\") pod \"f9c77148-164b-49db-a560-04f26bdb3fb5\" (UID: 
\"f9c77148-164b-49db-a560-04f26bdb3fb5\") " Aug 13 00:56:53.346636 kubelet[2368]: I0813 00:56:53.346562 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2be47590-73e6-4560-b4e0-3dcb1e538eee-cilium-config-path\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.346636 kubelet[2368]: I0813 00:56:53.346582 2368 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-hubble-tls\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.346636 kubelet[2368]: I0813 00:56:53.346599 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-run\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.346636 kubelet[2368]: I0813 00:56:53.346616 2368 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-net\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.346847 kubelet[2368]: I0813 00:56:53.346637 2368 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-lib-modules\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.346847 kubelet[2368]: I0813 00:56:53.346654 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-cgroup\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.346847 kubelet[2368]: I0813 00:56:53.346692 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod 
"f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.347407 kubelet[2368]: I0813 00:56:53.347371 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.351104 kubelet[2368]: I0813 00:56:53.351068 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:56:53.351227 kubelet[2368]: I0813 00:56:53.351137 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-hostproc" (OuterVolumeSpecName: "hostproc") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.351227 kubelet[2368]: I0813 00:56:53.351159 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.351227 kubelet[2368]: I0813 00:56:53.351180 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cni-path" (OuterVolumeSpecName: "cni-path") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.351575 kubelet[2368]: I0813 00:56:53.351549 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:53.353310 kubelet[2368]: I0813 00:56:53.353281 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c77148-164b-49db-a560-04f26bdb3fb5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:56:53.354433 kubelet[2368]: I0813 00:56:53.354401 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be47590-73e6-4560-b4e0-3dcb1e538eee-kube-api-access-bg9s4" (OuterVolumeSpecName: "kube-api-access-bg9s4") pod "2be47590-73e6-4560-b4e0-3dcb1e538eee" (UID: "2be47590-73e6-4560-b4e0-3dcb1e538eee"). InnerVolumeSpecName "kube-api-access-bg9s4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:56:53.355639 kubelet[2368]: I0813 00:56:53.355611 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-kube-api-access-vxmqg" (OuterVolumeSpecName: "kube-api-access-vxmqg") pod "f9c77148-164b-49db-a560-04f26bdb3fb5" (UID: "f9c77148-164b-49db-a560-04f26bdb3fb5"). InnerVolumeSpecName "kube-api-access-vxmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:56:53.447356 kubelet[2368]: I0813 00:56:53.447227 2368 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-xtables-lock\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.447605 kubelet[2368]: I0813 00:56:53.447580 2368 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxmqg\" (UniqueName: \"kubernetes.io/projected/f9c77148-164b-49db-a560-04f26bdb3fb5-kube-api-access-vxmqg\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449758 kubelet[2368]: I0813 00:56:53.449730 2368 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-etc-cni-netd\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449884 kubelet[2368]: I0813 00:56:53.449762 2368 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9c77148-164b-49db-a560-04f26bdb3fb5-clustermesh-secrets\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449884 kubelet[2368]: I0813 00:56:53.449780 2368 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-cni-path\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449884 kubelet[2368]: I0813 
00:56:53.449795 2368 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bg9s4\" (UniqueName: \"kubernetes.io/projected/2be47590-73e6-4560-b4e0-3dcb1e538eee-kube-api-access-bg9s4\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449884 kubelet[2368]: I0813 00:56:53.449812 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9c77148-164b-49db-a560-04f26bdb3fb5-cilium-config-path\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449884 kubelet[2368]: I0813 00:56:53.449829 2368 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-hostproc\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449884 kubelet[2368]: I0813 00:56:53.449844 2368 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.449884 kubelet[2368]: I0813 00:56:53.449859 2368 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9c77148-164b-49db-a560-04f26bdb3fb5-bpf-maps\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\"" Aug 13 00:56:53.462693 kubelet[2368]: I0813 00:56:53.462670 2368 scope.go:117] "RemoveContainer" containerID="36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06" Aug 13 00:56:53.465067 env[1437]: time="2025-08-13T00:56:53.464515144Z" level=info msg="RemoveContainer for \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\"" Aug 13 00:56:53.470620 systemd[1]: Removed slice kubepods-besteffort-pod2be47590_73e6_4560_b4e0_3dcb1e538eee.slice. Aug 13 00:56:53.475864 systemd[1]: Removed slice kubepods-burstable-podf9c77148_164b_49db_a560_04f26bdb3fb5.slice. 
Aug 13 00:56:53.476020 systemd[1]: kubepods-burstable-podf9c77148_164b_49db_a560_04f26bdb3fb5.slice: Consumed 6.938s CPU time. Aug 13 00:56:53.478412 env[1437]: time="2025-08-13T00:56:53.478293437Z" level=info msg="RemoveContainer for \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\" returns successfully" Aug 13 00:56:53.478659 kubelet[2368]: I0813 00:56:53.478644 2368 scope.go:117] "RemoveContainer" containerID="36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06" Aug 13 00:56:53.479109 env[1437]: time="2025-08-13T00:56:53.478969651Z" level=error msg="ContainerStatus for \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\": not found" Aug 13 00:56:53.479458 kubelet[2368]: E0813 00:56:53.479430 2368 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\": not found" containerID="36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06" Aug 13 00:56:53.479614 kubelet[2368]: I0813 00:56:53.479467 2368 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06"} err="failed to get container status \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\": rpc error: code = NotFound desc = an error occurred when try to find container \"36b4861a89ab0d34bb2ce4ba58b49dd41f829903058922088b63c444c5380b06\": not found" Aug 13 00:56:53.479678 kubelet[2368]: I0813 00:56:53.479620 2368 scope.go:117] "RemoveContainer" containerID="56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549" Aug 13 00:56:53.480984 env[1437]: time="2025-08-13T00:56:53.480761589Z" level=info msg="RemoveContainer 
for \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\"" Aug 13 00:56:53.488025 env[1437]: time="2025-08-13T00:56:53.487993743Z" level=info msg="RemoveContainer for \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\" returns successfully" Aug 13 00:56:53.488176 kubelet[2368]: I0813 00:56:53.488152 2368 scope.go:117] "RemoveContainer" containerID="997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396" Aug 13 00:56:53.489170 env[1437]: time="2025-08-13T00:56:53.489141668Z" level=info msg="RemoveContainer for \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\"" Aug 13 00:56:53.498169 env[1437]: time="2025-08-13T00:56:53.497056936Z" level=info msg="RemoveContainer for \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\" returns successfully" Aug 13 00:56:53.498279 kubelet[2368]: I0813 00:56:53.497214 2368 scope.go:117] "RemoveContainer" containerID="58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2" Aug 13 00:56:53.498365 env[1437]: time="2025-08-13T00:56:53.498189960Z" level=info msg="RemoveContainer for \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\"" Aug 13 00:56:53.505660 env[1437]: time="2025-08-13T00:56:53.505618418Z" level=info msg="RemoveContainer for \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\" returns successfully" Aug 13 00:56:53.506624 kubelet[2368]: I0813 00:56:53.506608 2368 scope.go:117] "RemoveContainer" containerID="c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513" Aug 13 00:56:53.507840 env[1437]: time="2025-08-13T00:56:53.507811965Z" level=info msg="RemoveContainer for \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\"" Aug 13 00:56:53.514863 env[1437]: time="2025-08-13T00:56:53.514829514Z" level=info msg="RemoveContainer for \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\" returns successfully" Aug 13 00:56:53.515027 kubelet[2368]: I0813 00:56:53.515007 2368 
scope.go:117] "RemoveContainer" containerID="9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3" Aug 13 00:56:53.515995 env[1437]: time="2025-08-13T00:56:53.515969439Z" level=info msg="RemoveContainer for \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\"" Aug 13 00:56:53.523361 env[1437]: time="2025-08-13T00:56:53.523323695Z" level=info msg="RemoveContainer for \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\" returns successfully" Aug 13 00:56:53.523507 kubelet[2368]: I0813 00:56:53.523486 2368 scope.go:117] "RemoveContainer" containerID="56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549" Aug 13 00:56:53.523706 env[1437]: time="2025-08-13T00:56:53.523650702Z" level=error msg="ContainerStatus for \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\": not found" Aug 13 00:56:53.523832 kubelet[2368]: E0813 00:56:53.523806 2368 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\": not found" containerID="56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549" Aug 13 00:56:53.523906 kubelet[2368]: I0813 00:56:53.523839 2368 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549"} err="failed to get container status \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\": rpc error: code = NotFound desc = an error occurred when try to find container \"56548f7bf866bed0115c7bf712639ecef021b4d835413e9be6befeca4a117549\": not found" Aug 13 00:56:53.523906 kubelet[2368]: I0813 00:56:53.523863 2368 scope.go:117] "RemoveContainer" 
containerID="997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396" Aug 13 00:56:53.524108 env[1437]: time="2025-08-13T00:56:53.524058911Z" level=error msg="ContainerStatus for \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\": not found" Aug 13 00:56:53.524289 kubelet[2368]: E0813 00:56:53.524253 2368 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\": not found" containerID="997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396" Aug 13 00:56:53.524378 kubelet[2368]: I0813 00:56:53.524285 2368 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396"} err="failed to get container status \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\": rpc error: code = NotFound desc = an error occurred when try to find container \"997770d066a6a35069116720a8a79cf2cf1a8eaa26baf5e6ea6480bdfe0a6396\": not found" Aug 13 00:56:53.524378 kubelet[2368]: I0813 00:56:53.524307 2368 scope.go:117] "RemoveContainer" containerID="58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2" Aug 13 00:56:53.524555 env[1437]: time="2025-08-13T00:56:53.524493520Z" level=error msg="ContainerStatus for \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\": not found" Aug 13 00:56:53.524722 kubelet[2368]: E0813 00:56:53.524682 2368 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\": not found" containerID="58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2" Aug 13 00:56:53.524808 kubelet[2368]: I0813 00:56:53.524731 2368 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2"} err="failed to get container status \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"58ec68a5e5aceb9680fd4a487aa046c0977870a5f561ff0e12143cbcbe292cc2\": not found" Aug 13 00:56:53.524808 kubelet[2368]: I0813 00:56:53.524752 2368 scope.go:117] "RemoveContainer" containerID="c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513" Aug 13 00:56:53.525019 env[1437]: time="2025-08-13T00:56:53.524974830Z" level=error msg="ContainerStatus for \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\": not found" Aug 13 00:56:53.525145 kubelet[2368]: E0813 00:56:53.525123 2368 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\": not found" containerID="c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513" Aug 13 00:56:53.525244 kubelet[2368]: I0813 00:56:53.525157 2368 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513"} err="failed to get container status \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"c4708ab50ad6e1dd11d7220873124f656bf9731aa352e30d26e720f9ac362513\": not found" Aug 13 00:56:53.525244 kubelet[2368]: I0813 00:56:53.525179 2368 scope.go:117] "RemoveContainer" containerID="9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3" Aug 13 00:56:53.525450 env[1437]: time="2025-08-13T00:56:53.525404139Z" level=error msg="ContainerStatus for \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\": not found" Aug 13 00:56:53.525571 kubelet[2368]: E0813 00:56:53.525549 2368 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\": not found" containerID="9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3" Aug 13 00:56:53.525637 kubelet[2368]: I0813 00:56:53.525579 2368 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3"} err="failed to get container status \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f1ce19b81dff004a4276ca2962012db2bc02a8939916da9e2c8adbc8bc96da3\": not found" Aug 13 00:56:53.913863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc-rootfs.mount: Deactivated successfully. Aug 13 00:56:53.914284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-928f62d62bb89784a48c8726cfbb4c62a02aa255dbb10a8437f4ff168e3424fc-shm.mount: Deactivated successfully. 
Aug 13 00:56:53.914515 systemd[1]: var-lib-kubelet-pods-2be47590\x2d73e6\x2d4560\x2db4e0\x2d3dcb1e538eee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbg9s4.mount: Deactivated successfully. Aug 13 00:56:53.914730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d-rootfs.mount: Deactivated successfully. Aug 13 00:56:53.914811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-430a5aab684a6c00ad9de55c5acdca15270200d4e265bf47f6e2d8739ccee58d-shm.mount: Deactivated successfully. Aug 13 00:56:53.914895 systemd[1]: var-lib-kubelet-pods-f9c77148\x2d164b\x2d49db\x2da560\x2d04f26bdb3fb5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:56:53.914992 systemd[1]: var-lib-kubelet-pods-f9c77148\x2d164b\x2d49db\x2da560\x2d04f26bdb3fb5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxmqg.mount: Deactivated successfully. Aug 13 00:56:53.915078 systemd[1]: var-lib-kubelet-pods-f9c77148\x2d164b\x2d49db\x2da560\x2d04f26bdb3fb5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:56:54.953805 sshd[3917]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:54.957556 systemd[1]: sshd@21-10.200.4.32:22-10.200.16.10:40106.service: Deactivated successfully. Aug 13 00:56:54.958566 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:56:54.959293 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:56:54.960125 systemd-logind[1423]: Removed session 24. 
Aug 13 00:56:55.003524 kubelet[2368]: I0813 00:56:55.003486 2368 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2be47590-73e6-4560-b4e0-3dcb1e538eee" path="/var/lib/kubelet/pods/2be47590-73e6-4560-b4e0-3dcb1e538eee/volumes" Aug 13 00:56:55.004111 kubelet[2368]: I0813 00:56:55.004083 2368 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c77148-164b-49db-a560-04f26bdb3fb5" path="/var/lib/kubelet/pods/f9c77148-164b-49db-a560-04f26bdb3fb5/volumes" Aug 13 00:56:55.053170 systemd[1]: Started sshd@22-10.200.4.32:22-10.200.16.10:40108.service. Aug 13 00:56:55.648794 sshd[4087]: Accepted publickey for core from 10.200.16.10 port 40108 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:55.650529 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:55.655634 systemd-logind[1423]: New session 25 of user core. Aug 13 00:56:55.656140 systemd[1]: Started session-25.scope. Aug 13 00:56:56.140020 kubelet[2368]: E0813 00:56:56.139981 2368 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:56:56.786197 kubelet[2368]: I0813 00:56:56.786164 2368 memory_manager.go:355] "RemoveStaleState removing state" podUID="f9c77148-164b-49db-a560-04f26bdb3fb5" containerName="cilium-agent" Aug 13 00:56:56.786441 kubelet[2368]: I0813 00:56:56.786424 2368 memory_manager.go:355] "RemoveStaleState removing state" podUID="2be47590-73e6-4560-b4e0-3dcb1e538eee" containerName="cilium-operator" Aug 13 00:56:56.793698 systemd[1]: Created slice kubepods-burstable-pod0bf0432f_ffcd_4ebb_bd14_0072fde6bf28.slice. Aug 13 00:56:56.825859 sshd[4087]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:56.829088 systemd[1]: sshd@22-10.200.4.32:22-10.200.16.10:40108.service: Deactivated successfully. 
Aug 13 00:56:56.831070 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:56:56.832258 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:56:56.833886 systemd-logind[1423]: Removed session 25. Aug 13 00:56:56.869405 kubelet[2368]: I0813 00:56:56.869366 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-cgroup\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.869688 kubelet[2368]: I0813 00:56:56.869663 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-xtables-lock\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.869839 kubelet[2368]: I0813 00:56:56.869823 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-clustermesh-secrets\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.869996 kubelet[2368]: I0813 00:56:56.869976 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-config-path\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870120 kubelet[2368]: I0813 00:56:56.870107 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hostproc\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870235 kubelet[2368]: I0813 00:56:56.870221 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwq2f\" (UniqueName: \"kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-kube-api-access-nwq2f\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870346 kubelet[2368]: I0813 00:56:56.870334 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-net\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870461 kubelet[2368]: I0813 00:56:56.870449 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hubble-tls\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870585 kubelet[2368]: I0813 00:56:56.870572 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-etc-cni-netd\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870697 kubelet[2368]: I0813 00:56:56.870682 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-kernel\") pod \"cilium-q7hgk\" (UID: 
\"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870810 kubelet[2368]: I0813 00:56:56.870798 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cni-path\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.870918 kubelet[2368]: I0813 00:56:56.870904 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-lib-modules\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.871052 kubelet[2368]: I0813 00:56:56.871038 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-bpf-maps\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.871179 kubelet[2368]: I0813 00:56:56.871167 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-run\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.871290 kubelet[2368]: I0813 00:56:56.871278 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-ipsec-secrets\") pod \"cilium-q7hgk\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") " pod="kube-system/cilium-q7hgk" Aug 13 00:56:56.924101 systemd[1]: Started 
sshd@23-10.200.4.32:22-10.200.16.10:40120.service. Aug 13 00:56:57.100892 env[1437]: time="2025-08-13T00:56:57.100752784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7hgk,Uid:0bf0432f-ffcd-4ebb-bd14-0072fde6bf28,Namespace:kube-system,Attempt:0,}" Aug 13 00:56:57.135960 env[1437]: time="2025-08-13T00:56:57.135868609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:56:57.135960 env[1437]: time="2025-08-13T00:56:57.135908710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:56:57.136218 env[1437]: time="2025-08-13T00:56:57.136168815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:56:57.136543 env[1437]: time="2025-08-13T00:56:57.136493822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489 pid=4112 runtime=io.containerd.runc.v2 Aug 13 00:56:57.153893 systemd[1]: Started cri-containerd-8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489.scope. 
Aug 13 00:56:57.181847 env[1437]: time="2025-08-13T00:56:57.181808557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7hgk,Uid:0bf0432f-ffcd-4ebb-bd14-0072fde6bf28,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\"" Aug 13 00:56:57.185702 env[1437]: time="2025-08-13T00:56:57.185649736Z" level=info msg="CreateContainer within sandbox \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:56:57.209868 env[1437]: time="2025-08-13T00:56:57.209826335Z" level=info msg="CreateContainer within sandbox \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\"" Aug 13 00:56:57.219095 env[1437]: time="2025-08-13T00:56:57.219065126Z" level=info msg="StartContainer for \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\"" Aug 13 00:56:57.242901 systemd[1]: Started cri-containerd-0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad.scope. Aug 13 00:56:57.253200 systemd[1]: cri-containerd-0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad.scope: Deactivated successfully. 
Aug 13 00:56:57.315372 env[1437]: time="2025-08-13T00:56:57.315307312Z" level=info msg="shim disconnected" id=0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad Aug 13 00:56:57.315372 env[1437]: time="2025-08-13T00:56:57.315372914Z" level=warning msg="cleaning up after shim disconnected" id=0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad namespace=k8s.io Aug 13 00:56:57.315372 env[1437]: time="2025-08-13T00:56:57.315384814Z" level=info msg="cleaning up dead shim" Aug 13 00:56:57.323997 env[1437]: time="2025-08-13T00:56:57.323951291Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4168 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:56:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 00:56:57.324323 env[1437]: time="2025-08-13T00:56:57.324218496Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Aug 13 00:56:57.324821 env[1437]: time="2025-08-13T00:56:57.324774208Z" level=error msg="Failed to pipe stdout of container \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\"" error="reading from a closed fifo" Aug 13 00:56:57.325040 env[1437]: time="2025-08-13T00:56:57.324981212Z" level=error msg="Failed to pipe stderr of container \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\"" error="reading from a closed fifo" Aug 13 00:56:57.328768 env[1437]: time="2025-08-13T00:56:57.328715189Z" level=error msg="StartContainer for \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 00:56:57.329070 kubelet[2368]: E0813 00:56:57.329022 2368 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad" Aug 13 00:56:57.329430 kubelet[2368]: E0813 00:56:57.329229 2368 kuberuntime_manager.go:1341] "Unhandled Error" err=< Aug 13 00:56:57.329430 kubelet[2368]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 00:56:57.329430 kubelet[2368]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 00:56:57.329430 kubelet[2368]: rm /hostbin/cilium-mount Aug 13 00:56:57.329709 kubelet[2368]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwq2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-q7hgk_kube-system(0bf0432f-ffcd-4ebb-bd14-0072fde6bf28): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 00:56:57.329709 kubelet[2368]: > logger="UnhandledError" Aug 13 00:56:57.330919 kubelet[2368]: E0813 00:56:57.330718 2368 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q7hgk" podUID="0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" Aug 13 00:56:57.491069 env[1437]: time="2025-08-13T00:56:57.484796210Z" level=info msg="CreateContainer within sandbox \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Aug 13 00:56:57.518723 env[1437]: time="2025-08-13T00:56:57.518673810Z" level=info msg="CreateContainer within sandbox \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\"" Aug 13 00:56:57.519599 env[1437]: time="2025-08-13T00:56:57.519549928Z" level=info msg="StartContainer for \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\"" Aug 13 00:56:57.522882 sshd[4098]: Accepted publickey for core from 10.200.16.10 port 40120 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:56:57.524152 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:57.531198 systemd[1]: Started session-26.scope. Aug 13 00:56:57.532092 systemd-logind[1423]: New session 26 of user core. Aug 13 00:56:57.554202 systemd[1]: Started cri-containerd-2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185.scope. Aug 13 00:56:57.562614 systemd[1]: cri-containerd-2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185.scope: Deactivated successfully. Aug 13 00:56:57.562907 systemd[1]: Stopped cri-containerd-2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185.scope. 
Aug 13 00:56:57.588873 env[1437]: time="2025-08-13T00:56:57.588820658Z" level=info msg="shim disconnected" id=2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185 Aug 13 00:56:57.589093 env[1437]: time="2025-08-13T00:56:57.588877359Z" level=warning msg="cleaning up after shim disconnected" id=2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185 namespace=k8s.io Aug 13 00:56:57.589093 env[1437]: time="2025-08-13T00:56:57.588890359Z" level=info msg="cleaning up dead shim" Aug 13 00:56:57.597153 env[1437]: time="2025-08-13T00:56:57.597113429Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4209 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:56:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 00:56:57.597460 env[1437]: time="2025-08-13T00:56:57.597399435Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Aug 13 00:56:57.598069 env[1437]: time="2025-08-13T00:56:57.598027748Z" level=error msg="Failed to pipe stderr of container \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\"" error="reading from a closed fifo" Aug 13 00:56:57.598154 env[1437]: time="2025-08-13T00:56:57.598088349Z" level=error msg="Failed to pipe stdout of container \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\"" error="reading from a closed fifo" Aug 13 00:56:57.602227 env[1437]: time="2025-08-13T00:56:57.602182333Z" level=error msg="StartContainer for \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 00:56:57.602429 kubelet[2368]: E0813 00:56:57.602382 2368 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185" Aug 13 00:56:57.602571 kubelet[2368]: E0813 00:56:57.602549 2368 kuberuntime_manager.go:1341] "Unhandled Error" err=< Aug 13 00:56:57.602571 kubelet[2368]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 00:56:57.602571 kubelet[2368]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 00:56:57.602571 kubelet[2368]: rm /hostbin/cilium-mount Aug 13 00:56:57.602571 kubelet[2368]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwq2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-q7hgk_kube-system(0bf0432f-ffcd-4ebb-bd14-0072fde6bf28): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 00:56:57.602571 kubelet[2368]: > logger="UnhandledError" Aug 13 00:56:57.604252 kubelet[2368]: E0813 00:56:57.604220 2368 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q7hgk" podUID="0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" Aug 13 00:56:58.000630 kubelet[2368]: E0813 00:56:58.000586 2368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6w6f6" podUID="a909486a-5460-4bab-96ab-cbf67094b754" Aug 13 00:56:58.017827 sshd[4098]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:58.021335 systemd[1]: sshd@23-10.200.4.32:22-10.200.16.10:40120.service: Deactivated successfully. Aug 13 00:56:58.022200 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:56:58.022912 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:56:58.023761 systemd-logind[1423]: Removed session 26. Aug 13 00:56:58.118584 systemd[1]: Started sshd@24-10.200.4.32:22-10.200.16.10:40126.service. 
Aug 13 00:56:58.482410 kubelet[2368]: I0813 00:56:58.482373 2368 scope.go:117] "RemoveContainer" containerID="0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad" Aug 13 00:56:58.483579 env[1437]: time="2025-08-13T00:56:58.483528450Z" level=info msg="StopPodSandbox for \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\"" Aug 13 00:56:58.484152 env[1437]: time="2025-08-13T00:56:58.484105262Z" level=info msg="Container to stop \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:56:58.484295 env[1437]: time="2025-08-13T00:56:58.484266165Z" level=info msg="Container to stop \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:56:58.490468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489-shm.mount: Deactivated successfully. Aug 13 00:56:58.496160 env[1437]: time="2025-08-13T00:56:58.496130008Z" level=info msg="RemoveContainer for \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\"" Aug 13 00:56:58.500911 systemd[1]: cri-containerd-8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489.scope: Deactivated successfully. Aug 13 00:56:58.505494 env[1437]: time="2025-08-13T00:56:58.505459599Z" level=info msg="RemoveContainer for \"0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad\" returns successfully" Aug 13 00:56:58.528771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489-rootfs.mount: Deactivated successfully. 
Aug 13 00:56:58.540431 env[1437]: time="2025-08-13T00:56:58.540355214Z" level=info msg="shim disconnected" id=8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489
Aug 13 00:56:58.540431 env[1437]: time="2025-08-13T00:56:58.540411815Z" level=warning msg="cleaning up after shim disconnected" id=8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489 namespace=k8s.io
Aug 13 00:56:58.540431 env[1437]: time="2025-08-13T00:56:58.540425116Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:58.548172 env[1437]: time="2025-08-13T00:56:58.548135774Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4251 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:58.548471 env[1437]: time="2025-08-13T00:56:58.548438380Z" level=info msg="TearDown network for sandbox \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\" successfully"
Aug 13 00:56:58.548555 env[1437]: time="2025-08-13T00:56:58.548470780Z" level=info msg="StopPodSandbox for \"8b78d238c2c92d09cc0a946f5676538c54fcfee12de36b17c8da5b10c871b489\" returns successfully"
Aug 13 00:56:58.583665 kubelet[2368]: I0813 00:56:58.583621 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cni-path\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583665 kubelet[2368]: I0813 00:56:58.583665 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-bpf-maps\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583684 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-run\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583703 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-cgroup\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583720 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hostproc\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583739 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-net\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583760 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-kernel\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583787 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwq2f\" (UniqueName: \"kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-kube-api-access-nwq2f\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583809 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hubble-tls\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583830 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-xtables-lock\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583851 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-etc-cni-netd\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583875 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-ipsec-secrets\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.583934 kubelet[2368]: I0813 00:56:58.583913 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-config-path\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584021 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584070 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cni-path" (OuterVolumeSpecName: "cni-path") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584100 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584119 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584154 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584173 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hostproc" (OuterVolumeSpecName: "hostproc") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584193 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584440 kubelet[2368]: I0813 00:56:58.584226 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.584777 kubelet[2368]: I0813 00:56:58.583947 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-lib-modules\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.584777 kubelet[2368]: I0813 00:56:58.584661 2368 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-clustermesh-secrets\") pod \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\" (UID: \"0bf0432f-ffcd-4ebb-bd14-0072fde6bf28\") "
Aug 13 00:56:58.584777 kubelet[2368]: I0813 00:56:58.584730 2368 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cni-path\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.584777 kubelet[2368]: I0813 00:56:58.584747 2368 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-bpf-maps\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.584777 kubelet[2368]: I0813 00:56:58.584760 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-run\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.585009 kubelet[2368]: I0813 00:56:58.584785 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-cgroup\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.585009 kubelet[2368]: I0813 00:56:58.584796 2368 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hostproc\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.585009 kubelet[2368]: I0813 00:56:58.584809 2368 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-net\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.585009 kubelet[2368]: I0813 00:56:58.584822 2368 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.585575 kubelet[2368]: I0813 00:56:58.585548 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.589070 kubelet[2368]: I0813 00:56:58.589029 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:58.592265 kubelet[2368]: I0813 00:56:58.592225 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:56:58.593658 kubelet[2368]: I0813 00:56:58.593633 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-kube-api-access-nwq2f" (OuterVolumeSpecName: "kube-api-access-nwq2f") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "kube-api-access-nwq2f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:56:58.595309 systemd[1]: var-lib-kubelet-pods-0bf0432f\x2dffcd\x2d4ebb\x2dbd14\x2d0072fde6bf28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnwq2f.mount: Deactivated successfully.
Aug 13 00:56:58.595456 systemd[1]: var-lib-kubelet-pods-0bf0432f\x2dffcd\x2d4ebb\x2dbd14\x2d0072fde6bf28-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:56:58.601792 systemd[1]: var-lib-kubelet-pods-0bf0432f\x2dffcd\x2d4ebb\x2dbd14\x2d0072fde6bf28-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 00:56:58.603204 kubelet[2368]: I0813 00:56:58.603177 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:56:58.603503 kubelet[2368]: I0813 00:56:58.603472 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:56:58.603968 kubelet[2368]: I0813 00:56:58.603924 2368 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" (UID: "0bf0432f-ffcd-4ebb-bd14-0072fde6bf28"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:56:58.685358 kubelet[2368]: I0813 00:56:58.685316 2368 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nwq2f\" (UniqueName: \"kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-kube-api-access-nwq2f\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.685610 kubelet[2368]: I0813 00:56:58.685587 2368 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-hubble-tls\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.685719 kubelet[2368]: I0813 00:56:58.685706 2368 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-etc-cni-netd\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.685800 kubelet[2368]: I0813 00:56:58.685787 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-ipsec-secrets\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.685891 kubelet[2368]: I0813 00:56:58.685868 2368 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-xtables-lock\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.685964 kubelet[2368]: I0813 00:56:58.685891 2368 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-cilium-config-path\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.685964 kubelet[2368]: I0813 00:56:58.685904 2368 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-lib-modules\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.685964 kubelet[2368]: I0813 00:56:58.685916 2368 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28-clustermesh-secrets\") on node \"ci-3510.3.8-a-4e9ab5f8c8\" DevicePath \"\""
Aug 13 00:56:58.712444 sshd[4230]: Accepted publickey for core from 10.200.16.10 port 40126 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:56:58.713872 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:58.721744 systemd[1]: Started session-27.scope.
Aug 13 00:56:58.723032 systemd-logind[1423]: New session 27 of user core.
Aug 13 00:56:58.985465 systemd[1]: var-lib-kubelet-pods-0bf0432f\x2dffcd\x2d4ebb\x2dbd14\x2d0072fde6bf28-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:56:59.006491 systemd[1]: Removed slice kubepods-burstable-pod0bf0432f_ffcd_4ebb_bd14_0072fde6bf28.slice.
Aug 13 00:56:59.485992 kubelet[2368]: I0813 00:56:59.485928 2368 scope.go:117] "RemoveContainer" containerID="2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185"
Aug 13 00:56:59.490909 env[1437]: time="2025-08-13T00:56:59.490865912Z" level=info msg="RemoveContainer for \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\""
Aug 13 00:56:59.500284 env[1437]: time="2025-08-13T00:56:59.500241203Z" level=info msg="RemoveContainer for \"2eed9ac4dc6bb218f1d250e92fd2c27ac7fd23b1be51a658f275c72d29973185\" returns successfully"
Aug 13 00:56:59.542138 kubelet[2368]: I0813 00:56:59.542092 2368 memory_manager.go:355] "RemoveStaleState removing state" podUID="0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" containerName="mount-cgroup"
Aug 13 00:56:59.542328 kubelet[2368]: I0813 00:56:59.542177 2368 memory_manager.go:355] "RemoveStaleState removing state" podUID="0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" containerName="mount-cgroup"
Aug 13 00:56:59.548671 systemd[1]: Created slice kubepods-burstable-pod5600f5ab_5215_4d36_bd51_a5a172c4d6ef.slice.
Aug 13 00:56:59.592528 kubelet[2368]: I0813 00:56:59.592446 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-etc-cni-netd\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.592739 kubelet[2368]: I0813 00:56:59.592564 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-xtables-lock\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.592739 kubelet[2368]: I0813 00:56:59.592595 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-host-proc-sys-kernel\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.592739 kubelet[2368]: I0813 00:56:59.592614 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-bpf-maps\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.592739 kubelet[2368]: I0813 00:56:59.592672 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-cilium-config-path\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.592739 kubelet[2368]: I0813 00:56:59.592695 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-hubble-tls\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593011 kubelet[2368]: I0813 00:56:59.592748 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-cilium-run\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593011 kubelet[2368]: I0813 00:56:59.592770 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-cilium-ipsec-secrets\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593011 kubelet[2368]: I0813 00:56:59.592818 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-hostproc\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593011 kubelet[2368]: I0813 00:56:59.592892 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-cni-path\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593011 kubelet[2368]: I0813 00:56:59.592929 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-clustermesh-secrets\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593011 kubelet[2368]: I0813 00:56:59.592981 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-lib-modules\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593278 kubelet[2368]: I0813 00:56:59.593033 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-cilium-cgroup\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593278 kubelet[2368]: I0813 00:56:59.593060 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-host-proc-sys-net\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.593278 kubelet[2368]: I0813 00:56:59.593108 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhmks\" (UniqueName: \"kubernetes.io/projected/5600f5ab-5215-4d36-bd51-a5a172c4d6ef-kube-api-access-dhmks\") pod \"cilium-b7lxc\" (UID: \"5600f5ab-5215-4d36-bd51-a5a172c4d6ef\") " pod="kube-system/cilium-b7lxc"
Aug 13 00:56:59.852236 env[1437]: time="2025-08-13T00:56:59.852178960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7lxc,Uid:5600f5ab-5215-4d36-bd51-a5a172c4d6ef,Namespace:kube-system,Attempt:0,}"
Aug 13 00:56:59.887908 env[1437]: time="2025-08-13T00:56:59.887832385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:56:59.887908 env[1437]: time="2025-08-13T00:56:59.887873586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:56:59.888169 env[1437]: time="2025-08-13T00:56:59.887890286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:56:59.888474 env[1437]: time="2025-08-13T00:56:59.888400396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3 pid=4285 runtime=io.containerd.runc.v2
Aug 13 00:56:59.907785 systemd[1]: Started cri-containerd-6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3.scope.
Aug 13 00:56:59.931616 env[1437]: time="2025-08-13T00:56:59.931580874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7lxc,Uid:5600f5ab-5215-4d36-bd51-a5a172c4d6ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\""
Aug 13 00:56:59.934481 env[1437]: time="2025-08-13T00:56:59.934443833Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:56:59.958450 env[1437]: time="2025-08-13T00:56:59.958402920Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177\""
Aug 13 00:56:59.959046 env[1437]: time="2025-08-13T00:56:59.958863729Z" level=info msg="StartContainer for \"3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177\""
Aug 13 00:56:59.975899 systemd[1]: Started cri-containerd-3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177.scope.
Aug 13 00:57:00.001328 kubelet[2368]: E0813 00:57:00.000963 2368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6w6f6" podUID="a909486a-5460-4bab-96ab-cbf67094b754"
Aug 13 00:57:00.011877 env[1437]: time="2025-08-13T00:57:00.011825005Z" level=info msg="StartContainer for \"3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177\" returns successfully"
Aug 13 00:57:00.020041 systemd[1]: cri-containerd-3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177.scope: Deactivated successfully.
Aug 13 00:57:00.039980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177-rootfs.mount: Deactivated successfully.
Aug 13 00:57:00.060113 env[1437]: time="2025-08-13T00:57:00.060063778Z" level=info msg="shim disconnected" id=3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177
Aug 13 00:57:00.060337 env[1437]: time="2025-08-13T00:57:00.060116079Z" level=warning msg="cleaning up after shim disconnected" id=3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177 namespace=k8s.io
Aug 13 00:57:00.060337 env[1437]: time="2025-08-13T00:57:00.060131380Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:00.070650 env[1437]: time="2025-08-13T00:57:00.070599291Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4367 runtime=io.containerd.runc.v2\n"
Aug 13 00:57:00.421710 kubelet[2368]: W0813 00:57:00.421660 2368 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf0432f_ffcd_4ebb_bd14_0072fde6bf28.slice/cri-containerd-0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad.scope WatchSource:0}: container "0563daf7448308d46c99046df4de1df8c8e7d6e6322424018e24e6b1d5cfd4ad" in namespace "k8s.io": not found
Aug 13 00:57:00.494097 env[1437]: time="2025-08-13T00:57:00.494047639Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:57:00.535100 env[1437]: time="2025-08-13T00:57:00.535047166Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660\""
Aug 13 00:57:00.536801 env[1437]: time="2025-08-13T00:57:00.535692179Z" level=info msg="StartContainer for \"fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660\""
Aug 13 00:57:00.554903 systemd[1]: Started cri-containerd-fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660.scope.
Aug 13 00:57:00.582464 env[1437]: time="2025-08-13T00:57:00.582405622Z" level=info msg="StartContainer for \"fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660\" returns successfully"
Aug 13 00:57:00.588545 systemd[1]: cri-containerd-fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660.scope: Deactivated successfully.
Aug 13 00:57:00.628264 env[1437]: time="2025-08-13T00:57:00.628209247Z" level=info msg="shim disconnected" id=fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660
Aug 13 00:57:00.628264 env[1437]: time="2025-08-13T00:57:00.628265448Z" level=warning msg="cleaning up after shim disconnected" id=fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660 namespace=k8s.io
Aug 13 00:57:00.628576 env[1437]: time="2025-08-13T00:57:00.628276448Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:00.637428 env[1437]: time="2025-08-13T00:57:00.637389532Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4430 runtime=io.containerd.runc.v2\n"
Aug 13 00:57:00.987716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660-rootfs.mount: Deactivated successfully.
Aug 13 00:57:01.002745 kubelet[2368]: I0813 00:57:01.002699 2368 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf0432f-ffcd-4ebb-bd14-0072fde6bf28" path="/var/lib/kubelet/pods/0bf0432f-ffcd-4ebb-bd14-0072fde6bf28/volumes"
Aug 13 00:57:01.141901 kubelet[2368]: E0813 00:57:01.141850 2368 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:57:01.501910 env[1437]: time="2025-08-13T00:57:01.501859009Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:57:01.538204 env[1437]: time="2025-08-13T00:57:01.538022134Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf\""
Aug 13 00:57:01.538826 env[1437]: time="2025-08-13T00:57:01.538781749Z" level=info msg="StartContainer for \"d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf\""
Aug 13 00:57:01.580090 systemd[1]: Started cri-containerd-d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf.scope.
Aug 13 00:57:01.620574 systemd[1]: cri-containerd-d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf.scope: Deactivated successfully.
Aug 13 00:57:01.622009 env[1437]: time="2025-08-13T00:57:01.621960916Z" level=info msg="StartContainer for \"d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf\" returns successfully"
Aug 13 00:57:01.651522 env[1437]: time="2025-08-13T00:57:01.651470707Z" level=info msg="shim disconnected" id=d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf
Aug 13 00:57:01.651522 env[1437]: time="2025-08-13T00:57:01.651519508Z" level=warning msg="cleaning up after shim disconnected" id=d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf namespace=k8s.io
Aug 13 00:57:01.651835 env[1437]: time="2025-08-13T00:57:01.651531809Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:01.659341 env[1437]: time="2025-08-13T00:57:01.659283964Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4487 runtime=io.containerd.runc.v2\n"
Aug 13 00:57:01.987934 systemd[1]: run-containerd-runc-k8s.io-d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf-runc.4Yi7pJ.mount: Deactivated successfully.
Aug 13 00:57:01.988069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf-rootfs.mount: Deactivated successfully.
Aug 13 00:57:02.001189 kubelet[2368]: E0813 00:57:02.001138 2368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6w6f6" podUID="a909486a-5460-4bab-96ab-cbf67094b754"
Aug 13 00:57:02.508176 env[1437]: time="2025-08-13T00:57:02.508116801Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:57:02.550113 env[1437]: time="2025-08-13T00:57:02.550056636Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2\""
Aug 13 00:57:02.551659 env[1437]: time="2025-08-13T00:57:02.550647747Z" level=info msg="StartContainer for \"5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2\""
Aug 13 00:57:02.577605 systemd[1]: Started cri-containerd-5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2.scope.
Aug 13 00:57:02.607908 systemd[1]: cri-containerd-5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2.scope: Deactivated successfully.
Aug 13 00:57:02.610052 env[1437]: time="2025-08-13T00:57:02.609622021Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5600f5ab_5215_4d36_bd51_a5a172c4d6ef.slice/cri-containerd-5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2.scope/memory.events\": no such file or directory"
Aug 13 00:57:02.614998 env[1437]: time="2025-08-13T00:57:02.614952527Z" level=info msg="StartContainer for \"5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2\" returns successfully"
Aug 13 00:57:02.644209 env[1437]: time="2025-08-13T00:57:02.644133007Z" level=info msg="shim disconnected" id=5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2
Aug 13 00:57:02.644209 env[1437]: time="2025-08-13T00:57:02.644187908Z" level=warning msg="cleaning up after shim disconnected" id=5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2 namespace=k8s.io
Aug 13 00:57:02.644209 env[1437]: time="2025-08-13T00:57:02.644200909Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:02.652319 env[1437]: time="2025-08-13T00:57:02.652273969Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4540 runtime=io.containerd.runc.v2\n"
Aug 13 00:57:02.988315 systemd[1]: run-containerd-runc-k8s.io-5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2-runc.e1mR1q.mount: Deactivated successfully.
Aug 13 00:57:02.988462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2-rootfs.mount: Deactivated successfully.
Aug 13 00:57:03.511510 env[1437]: time="2025-08-13T00:57:03.511448190Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:57:03.540058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672880291.mount: Deactivated successfully.
Aug 13 00:57:03.546525 kubelet[2368]: W0813 00:57:03.544652 2368 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5600f5ab_5215_4d36_bd51_a5a172c4d6ef.slice/cri-containerd-3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177.scope WatchSource:0}: task 3d068ca2ba80f9e4a39bf09253cc343da5d857a75822910c07b2a31ba377e177 not found: not found
Aug 13 00:57:03.556237 env[1437]: time="2025-08-13T00:57:03.556194774Z" level=info msg="CreateContainer within sandbox \"6ed82837a09d11259771d079354201f12a8497933908cc36b265b582bfc1b6c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067\""
Aug 13 00:57:03.556882 env[1437]: time="2025-08-13T00:57:03.556849287Z" level=info msg="StartContainer for \"4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067\""
Aug 13 00:57:03.578903 systemd[1]: Started cri-containerd-4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067.scope.
Aug 13 00:57:03.618896 env[1437]: time="2025-08-13T00:57:03.618844012Z" level=info msg="StartContainer for \"4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067\" returns successfully"
Aug 13 00:57:04.001234 kubelet[2368]: E0813 00:57:04.001180 2368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6w6f6" podUID="a909486a-5460-4bab-96ab-cbf67094b754"
Aug 13 00:57:04.158966 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:57:05.204542 systemd[1]: run-containerd-runc-k8s.io-4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067-runc.wB7oXn.mount: Deactivated successfully.
Aug 13 00:57:05.325713 kubelet[2368]: I0813 00:57:05.325650 2368 setters.go:602] "Node became not ready" node="ci-3510.3.8-a-4e9ab5f8c8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:57:05Z","lastTransitionTime":"2025-08-13T00:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:57:06.001338 kubelet[2368]: E0813 00:57:06.001280 2368 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-6w6f6" podUID="a909486a-5460-4bab-96ab-cbf67094b754"
Aug 13 00:57:06.659245 kubelet[2368]: W0813 00:57:06.659203 2368 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5600f5ab_5215_4d36_bd51_a5a172c4d6ef.slice/cri-containerd-fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660.scope WatchSource:0}: task fb6d85ff84e8a3d8fb908a5ed316c039a7ab7e0b4da83eb210527b8e1deee660 not found: not found
Aug 13 00:57:06.923457 systemd-networkd[1594]: lxc_health: Link UP
Aug 13 00:57:06.937662 systemd-networkd[1594]: lxc_health: Gained carrier
Aug 13 00:57:06.938187 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:57:07.883292 kubelet[2368]: I0813 00:57:07.883220 2368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b7lxc" podStartSLOduration=8.883201636999999 podStartE2EDuration="8.883201637s" podCreationTimestamp="2025-08-13 00:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:57:04.531616168 +0000 UTC m=+204.146583334" watchObservedRunningTime="2025-08-13 00:57:07.883201637 +0000 UTC m=+207.498168903"
Aug 13 00:57:08.898135 systemd-networkd[1594]: lxc_health: Gained IPv6LL
Aug 13 00:57:09.543339 systemd[1]: run-containerd-runc-k8s.io-4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067-runc.DVcRV6.mount: Deactivated successfully.
Aug 13 00:57:09.768338 kubelet[2368]: W0813 00:57:09.768275 2368 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5600f5ab_5215_4d36_bd51_a5a172c4d6ef.slice/cri-containerd-d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf.scope WatchSource:0}: task d994b84807fb25464b6d51dd1c58caf512d65d008c9399731dc772493cb516cf not found: not found
Aug 13 00:57:11.670524 systemd[1]: run-containerd-runc-k8s.io-4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067-runc.QokN6L.mount: Deactivated successfully.
Aug 13 00:57:12.877427 kubelet[2368]: W0813 00:57:12.877369 2368 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5600f5ab_5215_4d36_bd51_a5a172c4d6ef.slice/cri-containerd-5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2.scope WatchSource:0}: task 5230d14014f9b6f189e6fe7675b1c9cca5ecff4bd2c5fca6e077cd3d1056abc2 not found: not found
Aug 13 00:57:13.894503 systemd[1]: run-containerd-runc-k8s.io-4cdaa41adf10027c0181e0f5984beaa8ce2b4d6dbe94d1e947291fc0c08db067-runc.7p9FkN.mount: Deactivated successfully.
Aug 13 00:57:14.044469 sshd[4230]: pam_unix(sshd:session): session closed for user core
Aug 13 00:57:14.048072 systemd[1]: sshd@24-10.200.4.32:22-10.200.16.10:40126.service: Deactivated successfully.
Aug 13 00:57:14.049147 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:57:14.049976 systemd-logind[1423]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:57:14.051144 systemd-logind[1423]: Removed session 27.