Aug 13 00:48:51.018049 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:48:51.018079 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:48:51.018093 kernel: BIOS-provided physical RAM map:
Aug 13 00:48:51.018103 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:48:51.018113 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Aug 13 00:48:51.018122 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Aug 13 00:48:51.018137 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Aug 13 00:48:51.018148 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Aug 13 00:48:51.018158 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Aug 13 00:48:51.018168 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Aug 13 00:48:51.018179 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Aug 13 00:48:51.018189 kernel: printk: bootconsole [earlyser0] enabled
Aug 13 00:48:51.018199 kernel: NX (Execute Disable) protection: active
Aug 13 00:48:51.018209 kernel: efi: EFI v2.70 by Microsoft
Aug 13 00:48:51.018225 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c7a98 RNG=0x3ffd1018
Aug 13 00:48:51.018236 kernel: random: crng init done
Aug 13 00:48:51.018246 kernel: SMBIOS 3.1.0 present.
Aug 13 00:48:51.018257 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Aug 13 00:48:51.018268 kernel: Hypervisor detected: Microsoft Hyper-V
Aug 13 00:48:51.018280 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Aug 13 00:48:51.018291 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
Aug 13 00:48:51.018301 kernel: Hyper-V: Nested features: 0x1e0101
Aug 13 00:48:51.018315 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Aug 13 00:48:51.018326 kernel: Hyper-V: Using hypercall for remote TLB flush
Aug 13 00:48:51.018337 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 00:48:51.018355 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Aug 13 00:48:51.018380 kernel: tsc: Detected 2593.906 MHz processor
Aug 13 00:48:51.018393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:48:51.018404 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:48:51.018414 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Aug 13 00:48:51.018425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:48:51.018436 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Aug 13 00:48:51.018451 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Aug 13 00:48:51.018461 kernel: Using GB pages for direct mapping
Aug 13 00:48:51.018470 kernel: Secure boot disabled
Aug 13 00:48:51.018481 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:48:51.018492 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Aug 13 00:48:51.018504 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018515 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018526 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Aug 13 00:48:51.018544 kernel: ACPI: FACS 0x000000003FFFE000 000040
Aug 13 00:48:51.018555 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018573 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018586 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018597 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018610 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018626 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018638 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:48:51.018650 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Aug 13 00:48:51.018664 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Aug 13 00:48:51.018677 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Aug 13 00:48:51.018688 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Aug 13 00:48:51.018700 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Aug 13 00:48:51.018726 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Aug 13 00:48:51.018741 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Aug 13 00:48:51.018754 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Aug 13 00:48:51.018767 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Aug 13 00:48:51.018778 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Aug 13 00:48:51.018790 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 00:48:51.018801 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 00:48:51.018813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Aug 13 00:48:51.018825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Aug 13 00:48:51.018837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Aug 13 00:48:51.018853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Aug 13 00:48:51.018865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Aug 13 00:48:51.018881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Aug 13 00:48:51.018893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Aug 13 00:48:51.018905 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Aug 13 00:48:51.018917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Aug 13 00:48:51.018937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Aug 13 00:48:51.018950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Aug 13 00:48:51.018961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Aug 13 00:48:51.018975 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Aug 13 00:48:51.018987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Aug 13 00:48:51.019000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Aug 13 00:48:51.019012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Aug 13 00:48:51.019025 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Aug 13 00:48:51.019038 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Aug 13 00:48:51.019051 kernel: Zone ranges:
Aug 13 00:48:51.019064 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:48:51.019076 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:48:51.019089 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 00:48:51.019101 kernel: Movable zone start for each node
Aug 13 00:48:51.019113 kernel: Early memory node ranges
Aug 13 00:48:51.019126 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 00:48:51.019139 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Aug 13 00:48:51.019150 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Aug 13 00:48:51.019161 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 00:48:51.019173 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Aug 13 00:48:51.019186 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:48:51.019201 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 00:48:51.019212 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Aug 13 00:48:51.019224 kernel: ACPI: PM-Timer IO Port: 0x408
Aug 13 00:48:51.019236 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Aug 13 00:48:51.019249 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:48:51.019262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:48:51.019273 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:48:51.019285 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Aug 13 00:48:51.019297 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 00:48:51.019314 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Aug 13 00:48:51.019328 kernel: Booting paravirtualized kernel on Hyper-V
Aug 13 00:48:51.019342 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:48:51.019356 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:48:51.019370 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Aug 13 00:48:51.019384 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Aug 13 00:48:51.019397 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:48:51.019410 kernel: Hyper-V: PV spinlocks enabled
Aug 13 00:48:51.019422 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:48:51.019438 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Aug 13 00:48:51.019451 kernel: Policy zone: Normal
Aug 13 00:48:51.019466 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:48:51.019480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:48:51.019493 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 00:48:51.019506 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:48:51.019519 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:48:51.019532 kernel: Memory: 8079144K/8387460K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 308056K reserved, 0K cma-reserved)
Aug 13 00:48:51.019549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:48:51.019562 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:48:51.019586 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:48:51.019603 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:48:51.019619 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:48:51.019633 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:48:51.019647 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:48:51.019660 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:48:51.019674 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:48:51.019688 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:48:51.019701 kernel: Using NULL legacy PIC
Aug 13 00:48:51.019730 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Aug 13 00:48:51.019744 kernel: Console: colour dummy device 80x25
Aug 13 00:48:51.019758 kernel: printk: console [tty1] enabled
Aug 13 00:48:51.019772 kernel: printk: console [ttyS0] enabled
Aug 13 00:48:51.019786 kernel: printk: bootconsole [earlyser0] disabled
Aug 13 00:48:51.019803 kernel: ACPI: Core revision 20210730
Aug 13 00:48:51.019816 kernel: Failed to register legacy timer interrupt
Aug 13 00:48:51.019830 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:48:51.019844 kernel: Hyper-V: Using IPI hypercalls
Aug 13 00:48:51.019858 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Aug 13 00:48:51.019872 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 00:48:51.019886 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 00:48:51.019900 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:48:51.019913 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:48:51.019927 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:48:51.019944 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 00:48:51.019958 kernel: RETBleed: Vulnerable
Aug 13 00:48:51.019971 kernel: Speculative Store Bypass: Vulnerable
Aug 13 00:48:51.019985 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:48:51.019999 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:48:51.020012 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:48:51.020026 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:48:51.020039 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:48:51.020053 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:48:51.020067 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 00:48:51.020083 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 00:48:51.020097 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 00:48:51.020110 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:48:51.020123 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 13 00:48:51.020137 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 13 00:48:51.020150 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 13 00:48:51.020164 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Aug 13 00:48:51.020177 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:48:51.020190 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:48:51.020204 kernel: LSM: Security Framework initializing
Aug 13 00:48:51.020217 kernel: SELinux: Initializing.
Aug 13 00:48:51.020231 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:48:51.020248 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:48:51.020262 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 00:48:51.020276 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 00:48:51.020289 kernel: signal: max sigframe size: 3632
Aug 13 00:48:51.020304 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:48:51.020317 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:48:51.020331 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:48:51.020345 kernel: x86: Booting SMP configuration:
Aug 13 00:48:51.020358 kernel: .... node #0, CPUs: #1
Aug 13 00:48:51.020373 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Aug 13 00:48:51.020390 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 00:48:51.020405 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:48:51.020418 kernel: smpboot: Max logical packages: 1
Aug 13 00:48:51.020431 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Aug 13 00:48:51.020444 kernel: devtmpfs: initialized
Aug 13 00:48:51.020458 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:48:51.020471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 13 00:48:51.020486 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:48:51.020503 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:48:51.020515 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:48:51.020528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:48:51.020541 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:48:51.020553 kernel: audit: type=2000 audit(1755046129.023:1): state=initialized audit_enabled=0 res=1
Aug 13 00:48:51.020566 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:48:51.020579 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:48:51.020593 kernel: cpuidle: using governor menu
Aug 13 00:48:51.020606 kernel: ACPI: bus type PCI registered
Aug 13 00:48:51.020621 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:48:51.020634 kernel: dca service started, version 1.12.1
Aug 13 00:48:51.020648 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:48:51.020661 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:48:51.020675 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:48:51.020689 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:48:51.020716 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:48:51.020731 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:48:51.020745 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:48:51.020762 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:48:51.020777 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:48:51.020791 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:48:51.020807 kernel: ACPI: Interpreter enabled
Aug 13 00:48:51.020821 kernel: ACPI: PM: (supports S0 S5)
Aug 13 00:48:51.020835 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:48:51.020851 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:48:51.020866 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Aug 13 00:48:51.020880 kernel: iommu: Default domain type: Translated
Aug 13 00:48:51.020898 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:48:51.020913 kernel: vgaarb: loaded
Aug 13 00:48:51.020926 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:48:51.020941 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:48:51.020954 kernel: PTP clock support registered
Aug 13 00:48:51.020966 kernel: Registered efivars operations
Aug 13 00:48:51.020978 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:48:51.020990 kernel: PCI: System does not support PCI
Aug 13 00:48:51.021002 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Aug 13 00:48:51.021017 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:48:51.021029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:48:51.021042 kernel: pnp: PnP ACPI init
Aug 13 00:48:51.021055 kernel: pnp: PnP ACPI: found 3 devices
Aug 13 00:48:51.021068 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:48:51.021082 kernel: NET: Registered PF_INET protocol family
Aug 13 00:48:51.021094 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:48:51.021107 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 00:48:51.021119 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:48:51.021134 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:48:51.021147 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Aug 13 00:48:51.021159 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 00:48:51.021171 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:48:51.021184 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:48:51.021196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:48:51.021209 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:48:51.021221 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:48:51.021234 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:48:51.021249 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Aug 13 00:48:51.021262 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 00:48:51.021275 kernel: Initialise system trusted keyrings
Aug 13 00:48:51.021287 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 00:48:51.021301 kernel: Key type asymmetric registered
Aug 13 00:48:51.021313 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:48:51.021326 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:48:51.021338 kernel: io scheduler mq-deadline registered
Aug 13 00:48:51.021351 kernel: io scheduler kyber registered
Aug 13 00:48:51.021365 kernel: io scheduler bfq registered
Aug 13 00:48:51.021378 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:48:51.021390 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:48:51.021404 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:48:51.021416 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 00:48:51.021430 kernel: i8042: PNP: No PS/2 controller found.
Aug 13 00:48:51.021594 kernel: rtc_cmos 00:02: registered as rtc0
Aug 13 00:48:51.021724 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T00:48:50 UTC (1755046130)
Aug 13 00:48:51.021842 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Aug 13 00:48:51.021859 kernel: intel_pstate: CPU model not supported
Aug 13 00:48:51.021873 kernel: efifb: probing for efifb
Aug 13 00:48:51.021887 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 00:48:51.021900 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 00:48:51.021913 kernel: efifb: scrolling: redraw
Aug 13 00:48:51.021926 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:48:51.021940 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:48:51.021957 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:48:51.021971 kernel: pstore: Registered efi as persistent store backend
Aug 13 00:48:51.021983 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:48:51.021996 kernel: Segment Routing with IPv6
Aug 13 00:48:51.022010 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:48:51.022022 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:48:51.022035 kernel: Key type dns_resolver registered
Aug 13 00:48:51.022048 kernel: IPI shorthand broadcast: enabled
Aug 13 00:48:51.022062 kernel: sched_clock: Marking stable (739517300, 22484000)->(953405800, -191404500)
Aug 13 00:48:51.022075 kernel: registered taskstats version 1
Aug 13 00:48:51.022091 kernel: Loading compiled-in X.509 certificates
Aug 13 00:48:51.022105 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 00:48:51.022118 kernel: Key type .fscrypt registered
Aug 13 00:48:51.022131 kernel: Key type fscrypt-provisioning registered
Aug 13 00:48:51.022145 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:48:51.022158 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:48:51.022171 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:48:51.022184 kernel: ima: No architecture policies found
Aug 13 00:48:51.022200 kernel: clk: Disabling unused clocks
Aug 13 00:48:51.022212 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 00:48:51.022225 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 00:48:51.022238 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 00:48:51.022251 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 00:48:51.022264 kernel: Run /init as init process
Aug 13 00:48:51.022276 kernel: with arguments:
Aug 13 00:48:51.022290 kernel: /init
Aug 13 00:48:51.022303 kernel: with environment:
Aug 13 00:48:51.022320 kernel: HOME=/
Aug 13 00:48:51.022334 kernel: TERM=linux
Aug 13 00:48:51.022346 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:48:51.022362 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:48:51.022378 systemd[1]: Detected virtualization microsoft.
Aug 13 00:48:51.022392 systemd[1]: Detected architecture x86-64.
Aug 13 00:48:51.022405 systemd[1]: Running in initrd.
Aug 13 00:48:51.022418 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:48:51.022434 systemd[1]: Hostname set to .
Aug 13 00:48:51.022448 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:48:51.022461 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:48:51.022475 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:48:51.022488 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:48:51.022502 systemd[1]: Reached target paths.target.
Aug 13 00:48:51.022515 systemd[1]: Reached target slices.target.
Aug 13 00:48:51.022529 systemd[1]: Reached target swap.target.
Aug 13 00:48:51.022544 systemd[1]: Reached target timers.target.
Aug 13 00:48:51.022559 systemd[1]: Listening on iscsid.socket.
Aug 13 00:48:51.022573 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:48:51.022587 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:48:51.022600 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:48:51.022615 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:48:51.022628 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:48:51.022642 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:48:51.022659 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:48:51.022673 systemd[1]: Reached target sockets.target.
Aug 13 00:48:51.022687 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:48:51.022701 systemd[1]: Finished network-cleanup.service.
Aug 13 00:48:51.022727 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:48:51.022740 systemd[1]: Starting systemd-journald.service...
Aug 13 00:48:51.022754 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:48:51.022767 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:48:51.022781 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:48:51.022802 systemd-journald[183]: Journal started
Aug 13 00:48:51.022867 systemd-journald[183]: Runtime Journal (/run/log/journal/97d734f2c1a74851bec707c74950ee88) is 8.0M, max 159.0M, 151.0M free.
Aug 13 00:48:51.036719 systemd[1]: Started systemd-journald.service.
Aug 13 00:48:51.024245 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 00:48:51.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.056298 kernel: audit: type=1130 audit(1755046131.042:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.043225 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:48:51.056484 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:48:51.060649 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:48:51.065443 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:48:51.074275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:48:51.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.094756 kernel: audit: type=1130 audit(1755046131.055:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.085342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:48:51.105977 systemd-resolved[185]: Positive Trust Anchors:
Aug 13 00:48:51.114121 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:48:51.118840 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:48:51.114220 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:48:51.118995 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:48:51.135410 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:48:51.211637 kernel: audit: type=1130 audit(1755046131.059:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.211668 kernel: Bridge firewalling registered
Aug 13 00:48:51.211684 kernel: audit: type=1130 audit(1755046131.063:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.211701 kernel: audit: type=1130 audit(1755046131.096:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.211728 kernel: audit: type=1130 audit(1755046131.133:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.211751 kernel: audit: type=1130 audit(1755046131.169:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:48:51.140884 systemd-resolved[185]: Defaulting to hostname 'linux'.
Aug 13 00:48:51.214213 dracut-cmdline[200]: dracut-dracut-053
Aug 13 00:48:51.214213 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Aug 13 00:48:51.214213 dracut-cmdline[200]: BEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:48:51.233212 kernel: SCSI subsystem initialized
Aug 13 00:48:51.168501 systemd[1]: Started systemd-resolved.service.
Aug 13 00:48:51.241743 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:48:51.169376 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 00:48:51.170466 systemd[1]: Reached target nss-lookup.target. Aug 13 00:48:51.253580 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:48:51.253604 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:48:51.257128 systemd-modules-load[184]: Inserted module 'dm_multipath' Aug 13 00:48:51.258998 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:48:51.275591 kernel: audit: type=1130 audit(1755046131.260:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:51.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:51.262135 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:48:51.285125 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:48:51.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:51.302728 kernel: audit: type=1130 audit(1755046131.286:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:51.335724 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 00:48:51.354729 kernel: iscsi: registered transport (tcp) Aug 13 00:48:51.381338 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:48:51.381413 kernel: QLogic iSCSI HBA Driver Aug 13 00:48:51.410882 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:48:51.413760 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:48:51.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:51.468733 kernel: raid6: avx512x4 gen() 18493 MB/s Aug 13 00:48:51.484723 kernel: raid6: avx512x4 xor() 7882 MB/s Aug 13 00:48:51.504717 kernel: raid6: avx512x2 gen() 18470 MB/s Aug 13 00:48:51.524725 kernel: raid6: avx512x2 xor() 30049 MB/s Aug 13 00:48:51.544718 kernel: raid6: avx512x1 gen() 18468 MB/s Aug 13 00:48:51.563732 kernel: raid6: avx512x1 xor() 26972 MB/s Aug 13 00:48:51.583720 kernel: raid6: avx2x4 gen() 18367 MB/s Aug 13 00:48:51.603717 kernel: raid6: avx2x4 xor() 7617 MB/s Aug 13 00:48:51.623718 kernel: raid6: avx2x2 gen() 18413 MB/s Aug 13 00:48:51.643721 kernel: raid6: avx2x2 xor() 22308 MB/s Aug 13 00:48:51.663718 kernel: raid6: avx2x1 gen() 14116 MB/s Aug 13 00:48:51.682718 kernel: raid6: avx2x1 xor() 19442 MB/s Aug 13 00:48:51.702718 kernel: raid6: sse2x4 gen() 11777 MB/s Aug 13 00:48:51.721718 kernel: raid6: sse2x4 xor() 7458 MB/s Aug 13 00:48:51.741717 kernel: raid6: sse2x2 gen() 12974 MB/s Aug 13 00:48:51.761721 kernel: raid6: sse2x2 xor() 7529 MB/s Aug 13 00:48:51.781718 kernel: raid6: sse2x1 gen() 11690 MB/s Aug 13 00:48:51.804798 kernel: raid6: sse2x1 xor() 5956 MB/s Aug 13 00:48:51.804815 kernel: raid6: using algorithm avx512x4 gen() 18493 MB/s Aug 13 00:48:51.804827 kernel: raid6: .... 
xor() 7882 MB/s, rmw enabled Aug 13 00:48:51.808105 kernel: raid6: using avx512x2 recovery algorithm Aug 13 00:48:51.826728 kernel: xor: automatically using best checksumming function avx Aug 13 00:48:51.922740 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:48:51.931036 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:48:51.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:51.935000 audit: BPF prog-id=7 op=LOAD Aug 13 00:48:51.935000 audit: BPF prog-id=8 op=LOAD Aug 13 00:48:51.936289 systemd[1]: Starting systemd-udevd.service... Aug 13 00:48:51.950112 systemd-udevd[383]: Using default interface naming scheme 'v252'. Aug 13 00:48:51.954813 systemd[1]: Started systemd-udevd.service. Aug 13 00:48:51.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:51.964295 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:48:51.979587 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Aug 13 00:48:52.009640 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:48:52.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:52.012611 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:48:52.049107 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:48:52.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:48:52.099723 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:48:52.128722 kernel: hv_vmbus: Vmbus version:5.2 Aug 13 00:48:52.132720 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:48:52.136719 kernel: AES CTR mode by8 optimization enabled Aug 13 00:48:52.148719 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 00:48:52.153723 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 00:48:52.173215 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:48:52.173257 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 00:48:52.176136 kernel: scsi host1: storvsc_host_t Aug 13 00:48:52.178732 kernel: scsi host0: storvsc_host_t Aug 13 00:48:52.178777 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 00:48:52.186687 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 00:48:52.193720 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 00:48:52.205726 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 00:48:52.205756 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 00:48:52.215879 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 00:48:52.230663 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 00:48:52.256907 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:48:52.256930 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 00:48:52.267023 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:48:52.267215 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:48:52.267403 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 00:48:52.267580 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 00:48:52.267766 kernel: sr 0:0:0:2: Attached scsi CD-ROM 
sr0 Aug 13 00:48:52.267945 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:48:52.267965 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:48:52.342775 kernel: hv_netvsc 7ced8d6c-64a8-7ced-8d6c-64a87ced8d6c eth0: VF slot 1 added Aug 13 00:48:52.357617 kernel: hv_vmbus: registering driver hv_pci Aug 13 00:48:52.357666 kernel: hv_pci 8495ea95-da07-46e3-940b-63180552c2f7: PCI VMBus probing: Using version 0x10004 Aug 13 00:48:52.410691 kernel: hv_pci 8495ea95-da07-46e3-940b-63180552c2f7: PCI host bridge to bus da07:00 Aug 13 00:48:52.410889 kernel: pci_bus da07:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 13 00:48:52.411071 kernel: pci_bus da07:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:48:52.411229 kernel: pci da07:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 13 00:48:52.411401 kernel: pci da07:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 00:48:52.411578 kernel: pci da07:00:02.0: enabling Extended Tags Aug 13 00:48:52.411756 kernel: pci da07:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at da07:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 13 00:48:52.411922 kernel: pci_bus da07:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:48:52.412074 kernel: pci da07:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 00:48:52.634339 kernel: mlx5_core da07:00:02.0: enabling device (0000 -> 0002) Aug 13 00:48:52.912686 kernel: mlx5_core da07:00:02.0: firmware version: 14.30.5000 Aug 13 00:48:52.912899 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (440) Aug 13 00:48:52.912927 kernel: mlx5_core da07:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Aug 13 00:48:52.913090 kernel: mlx5_core da07:00:02.0: Supported tc offload range - chains: 1, prios: 1 Aug 13 00:48:52.913266 kernel: mlx5_core da07:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing 
Aug 13 00:48:52.913431 kernel: hv_netvsc 7ced8d6c-64a8-7ced-8d6c-64a87ced8d6c eth0: VF registering: eth1 Aug 13 00:48:52.913588 kernel: mlx5_core da07:00:02.0 eth1: joined to eth0 Aug 13 00:48:52.679995 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:48:52.710161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:48:52.917571 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:48:52.934012 kernel: mlx5_core da07:00:02.0 enP55815s1: renamed from eth1 Aug 13 00:48:52.946200 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:48:52.953290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:48:52.958527 systemd[1]: Starting disk-uuid.service... Aug 13 00:48:52.975720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:48:52.985734 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:48:53.994611 disk-uuid[561]: The operation has completed successfully. Aug 13 00:48:53.997166 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:48:54.072249 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:48:54.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.072355 systemd[1]: Finished disk-uuid.service. Aug 13 00:48:54.084253 systemd[1]: Starting verity-setup.service... Aug 13 00:48:54.133197 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:48:54.499267 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:48:54.503073 systemd[1]: Mounting sysusr-usr.mount... 
Aug 13 00:48:54.510020 systemd[1]: Finished verity-setup.service. Aug 13 00:48:54.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.585870 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:48:54.586310 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:48:54.590061 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:48:54.594095 systemd[1]: Starting ignition-setup.service... Aug 13 00:48:54.599245 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:48:54.623396 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:48:54.623455 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:48:54.623474 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:48:54.669825 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:48:54.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.674000 audit: BPF prog-id=9 op=LOAD Aug 13 00:48:54.675040 systemd[1]: Starting systemd-networkd.service... Aug 13 00:48:54.702553 systemd-networkd[825]: lo: Link UP Aug 13 00:48:54.702747 systemd-networkd[825]: lo: Gained carrier Aug 13 00:48:54.706843 systemd-networkd[825]: Enumeration completed Aug 13 00:48:54.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.706960 systemd[1]: Started systemd-networkd.service. Aug 13 00:48:54.709186 systemd[1]: Reached target network.target. 
Aug 13 00:48:54.712370 systemd[1]: Starting iscsiuio.service... Aug 13 00:48:54.721933 systemd[1]: Started iscsiuio.service. Aug 13 00:48:54.722153 systemd-networkd[825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:48:54.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.729874 systemd[1]: Starting iscsid.service... Aug 13 00:48:54.736384 iscsid[830]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:48:54.736384 iscsid[830]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:48:54.736384 iscsid[830]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:48:54.736384 iscsid[830]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:48:54.736384 iscsid[830]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:48:54.763567 iscsid[830]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:48:54.755846 systemd[1]: Started iscsid.service. Aug 13 00:48:54.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.768939 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:48:54.779427 systemd[1]: Finished dracut-initqueue.service.
Aug 13 00:48:54.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.783662 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:48:54.792869 kernel: mlx5_core da07:00:02.0 enP55815s1: Link up Aug 13 00:48:54.788550 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:48:54.793069 systemd[1]: Reached target remote-fs.target. Aug 13 00:48:54.800053 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:48:54.808344 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:48:54.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:54.828531 kernel: hv_netvsc 7ced8d6c-64a8-7ced-8d6c-64a87ced8d6c eth0: Data path switched to VF: enP55815s1 Aug 13 00:48:54.828767 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:48:54.828948 systemd-networkd[825]: enP55815s1: Link UP Aug 13 00:48:54.829116 systemd-networkd[825]: eth0: Link UP Aug 13 00:48:54.829333 systemd-networkd[825]: eth0: Gained carrier Aug 13 00:48:54.835015 systemd-networkd[825]: enP55815s1: Gained carrier Aug 13 00:48:54.854785 systemd-networkd[825]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:48:54.948477 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:48:55.047338 systemd[1]: Finished ignition-setup.service. Aug 13 00:48:55.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:55.052574 systemd[1]: Starting ignition-fetch-offline.service... 
Aug 13 00:48:56.438111 systemd-networkd[825]: eth0: Gained IPv6LL Aug 13 00:48:58.668612 ignition[852]: Ignition 2.14.0 Aug 13 00:48:58.668629 ignition[852]: Stage: fetch-offline Aug 13 00:48:58.668753 ignition[852]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:48:58.668804 ignition[852]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:48:58.771119 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:48:58.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:58.772632 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:48:58.796858 kernel: kauditd_printk_skb: 18 callbacks suppressed Aug 13 00:48:58.796889 kernel: audit: type=1130 audit(1755046138.775:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:58.771322 ignition[852]: parsed url from cmdline: "" Aug 13 00:48:58.777762 systemd[1]: Starting ignition-fetch.service... 
Aug 13 00:48:58.771326 ignition[852]: no config URL provided Aug 13 00:48:58.771335 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:48:58.771344 ignition[852]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:48:58.771351 ignition[852]: failed to fetch config: resource requires networking Aug 13 00:48:58.771741 ignition[852]: Ignition finished successfully Aug 13 00:48:58.786313 ignition[858]: Ignition 2.14.0 Aug 13 00:48:58.786320 ignition[858]: Stage: fetch Aug 13 00:48:58.786448 ignition[858]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:48:58.786477 ignition[858]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:48:58.791766 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:48:58.792246 ignition[858]: parsed url from cmdline: "" Aug 13 00:48:58.792255 ignition[858]: no config URL provided Aug 13 00:48:58.792279 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:48:58.792292 ignition[858]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:48:58.792354 ignition[858]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:48:58.871237 ignition[858]: GET result: OK Aug 13 00:48:58.872747 ignition[858]: config has been read from IMDS userdata Aug 13 00:48:58.872798 ignition[858]: parsing config with SHA512: 14f7c4aac0ba0bb4de0b8660cd9a9444267abbc3fdf37ff55b63b0dfeca0f9cda46de2a5ffde33313112840f87ddfd4c2f5b87e5464c9a29d6ab99d8a65b4278 Aug 13 00:48:58.879081 unknown[858]: fetched base config from "system" Aug 13 00:48:58.879098 unknown[858]: fetched base config from "system" Aug 13 00:48:58.879109 unknown[858]: fetched user config from "azure" Aug 13 00:48:58.885278 ignition[858]: fetch: fetch complete Aug 13 00:48:58.885288 ignition[858]: fetch: fetch passed 
Aug 13 00:48:58.885347 ignition[858]: Ignition finished successfully Aug 13 00:48:58.889894 systemd[1]: Finished ignition-fetch.service. Aug 13 00:48:58.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:58.893387 systemd[1]: Starting ignition-kargs.service... Aug 13 00:48:58.911649 kernel: audit: type=1130 audit(1755046138.892:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:58.918512 ignition[864]: Ignition 2.14.0 Aug 13 00:48:58.918522 ignition[864]: Stage: kargs Aug 13 00:48:58.918651 ignition[864]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:48:58.918681 ignition[864]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:48:58.928421 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:48:58.932384 ignition[864]: kargs: kargs passed Aug 13 00:48:58.932439 ignition[864]: Ignition finished successfully Aug 13 00:48:58.936437 systemd[1]: Finished ignition-kargs.service. Aug 13 00:48:58.939317 systemd[1]: Starting ignition-disks.service... Aug 13 00:48:58.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:58.955527 ignition[870]: Ignition 2.14.0 Aug 13 00:48:58.959158 kernel: audit: type=1130 audit(1755046138.937:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:48:58.955539 ignition[870]: Stage: disks Aug 13 00:48:58.955680 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:48:58.955715 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:48:58.964979 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:48:58.969236 ignition[870]: disks: disks passed Aug 13 00:48:58.969291 ignition[870]: Ignition finished successfully Aug 13 00:48:58.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:58.971935 systemd[1]: Finished ignition-disks.service. Aug 13 00:48:58.990862 kernel: audit: type=1130 audit(1755046138.973:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:58.975836 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:48:58.990822 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:48:58.993066 systemd[1]: Reached target local-fs.target. Aug 13 00:48:58.995018 systemd[1]: Reached target sysinit.target. Aug 13 00:48:58.999050 systemd[1]: Reached target basic.target. Aug 13 00:48:59.002739 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:48:59.080547 systemd-fsck[878]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks Aug 13 00:48:59.086570 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:48:59.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:59.091918 systemd[1]: Mounting sysroot.mount... 
Aug 13 00:48:59.106516 kernel: audit: type=1130 audit(1755046139.090:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:48:59.121728 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:48:59.121874 systemd[1]: Mounted sysroot.mount. Aug 13 00:48:59.125410 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:48:59.169194 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:48:59.175093 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:48:59.179791 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:48:59.180726 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:48:59.189562 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:48:59.252301 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:48:59.257913 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:48:59.269734 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (889) Aug 13 00:48:59.278721 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:48:59.278751 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:48:59.278766 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:48:59.284609 initrd-setup-root[894]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:48:59.290125 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:48:59.325669 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:48:59.353080 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:48:59.376201 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:49:00.074226 systemd[1]: Finished initrd-setup-root.service. 
Aug 13 00:49:00.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:00.079217 systemd[1]: Starting ignition-mount.service... Aug 13 00:49:00.091291 kernel: audit: type=1130 audit(1755046140.077:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:00.094660 systemd[1]: Starting sysroot-boot.service... Aug 13 00:49:00.099171 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 00:49:00.101859 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Aug 13 00:49:00.121117 systemd[1]: Finished sysroot-boot.service. Aug 13 00:49:00.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:00.136730 kernel: audit: type=1130 audit(1755046140.124:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:00.171031 ignition[958]: INFO : Ignition 2.14.0 Aug 13 00:49:00.171031 ignition[958]: INFO : Stage: mount Aug 13 00:49:00.175174 ignition[958]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:49:00.175174 ignition[958]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:49:00.188469 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:49:00.192467 ignition[958]: INFO : mount: mount passed Aug 13 00:49:00.194486 ignition[958]: INFO : Ignition finished successfully Aug 13 00:49:00.197225 systemd[1]: Finished ignition-mount.service. Aug 13 00:49:00.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:00.213751 kernel: audit: type=1130 audit(1755046140.199:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:00.929141 coreos-metadata[888]: Aug 13 00:49:00.928 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:49:00.952307 coreos-metadata[888]: Aug 13 00:49:00.952 INFO Fetch successful Aug 13 00:49:00.989633 coreos-metadata[888]: Aug 13 00:49:00.989 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:49:01.004984 coreos-metadata[888]: Aug 13 00:49:01.004 INFO Fetch successful Aug 13 00:49:01.026183 coreos-metadata[888]: Aug 13 00:49:01.026 INFO wrote hostname ci-3510.3.8-a-09b422438d to /sysroot/etc/hostname Aug 13 00:49:01.032646 systemd[1]: Finished flatcar-metadata-hostname.service. 
Aug 13 00:49:01.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:01.035878 systemd[1]: Starting ignition-files.service... Aug 13 00:49:01.051874 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:49:01.056720 kernel: audit: type=1130 audit(1755046141.034:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:01.073729 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (967) Aug 13 00:49:01.073764 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:49:01.081391 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:49:01.081414 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:49:01.092406 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Aug 13 00:49:01.104367 ignition[986]: INFO : Ignition 2.14.0 Aug 13 00:49:01.104367 ignition[986]: INFO : Stage: files Aug 13 00:49:01.108377 ignition[986]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:49:01.108377 ignition[986]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:49:01.121645 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:49:01.140416 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:49:01.159214 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:49:01.159214 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:49:01.222331 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:49:01.226038 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:49:01.252924 unknown[986]: wrote ssh authorized keys file for user: core Aug 13 00:49:01.255777 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:49:01.269029 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:49:01.274489 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:49:01.704202 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:49:01.885836 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:49:02.185962 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:49:02.191005 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:49:02.382644 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:49:02.431969 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:49:02.437689 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774481409" Aug 13 00:49:02.473390 ignition[986]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774481409": device or resource busy Aug 13 00:49:02.473390 ignition[986]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1774481409", trying btrfs: device or resource busy Aug 13 00:49:02.473390 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774481409" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774481409" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1774481409" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): 
op(e): [finished] unmounting "/mnt/oem1774481409" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118235501" Aug 13 00:49:02.524895 ignition[986]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118235501": device or resource busy Aug 13 00:49:02.524895 ignition[986]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem118235501", trying btrfs: device or resource busy Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118235501" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem118235501" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem118235501" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem118235501" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:49:02.524895 ignition[986]: INFO : files: createFilesystemsFiles: 
createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:02.479147 systemd[1]: mnt-oem1774481409.mount: Deactivated successfully. Aug 13 00:49:02.594613 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:49:02.956620 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Aug 13 00:49:03.156601 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:49:03.156601 ignition[986]: INFO : files: op(14): [started] processing unit "waagent.service" Aug 13 00:49:03.156601 ignition[986]: INFO : files: op(14): [finished] processing unit "waagent.service" Aug 13 00:49:03.156601 ignition[986]: INFO : files: op(15): [started] processing unit "nvidia.service" Aug 13 00:49:03.156601 ignition[986]: INFO : files: op(15): [finished] processing unit "nvidia.service" Aug 13 00:49:03.190252 kernel: audit: type=1130 audit(1755046143.164:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:49:03.190336 ignition[986]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:49:03.190336 ignition[986]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:49:03.190336 ignition[986]: INFO : files: files passed Aug 13 00:49:03.190336 ignition[986]: INFO : Ignition finished successfully Aug 13 00:49:03.160371 systemd[1]: Finished ignition-files.service. Aug 13 00:49:03.166495 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Aug 13 00:49:03.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.249238 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:49:03.185106 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:49:03.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.188580 systemd[1]: Starting ignition-quench.service... Aug 13 00:49:03.195939 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:49:03.242575 systemd[1]: Reached target ignition-complete.target. Aug 13 00:49:03.245687 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:49:03.247765 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:49:03.247890 systemd[1]: Finished ignition-quench.service. Aug 13 00:49:03.280450 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:49:03.280531 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:49:03.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:49:03.286727 systemd[1]: Reached target initrd-fs.target. Aug 13 00:49:03.290514 systemd[1]: Reached target initrd.target. Aug 13 00:49:03.294033 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:49:03.297622 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:49:03.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.308051 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:49:03.310932 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:49:03.320605 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:49:03.322644 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:49:03.326462 systemd[1]: Stopped target timers.target. Aug 13 00:49:03.330427 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:49:03.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.330555 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:49:03.334116 systemd[1]: Stopped target initrd.target. Aug 13 00:49:03.338109 systemd[1]: Stopped target basic.target. Aug 13 00:49:03.341922 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:49:03.345865 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:49:03.349506 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:49:03.353645 systemd[1]: Stopped target remote-fs.target. Aug 13 00:49:03.357489 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:49:03.361406 systemd[1]: Stopped target sysinit.target. Aug 13 00:49:03.365104 systemd[1]: Stopped target local-fs.target. Aug 13 00:49:03.368992 systemd[1]: Stopped target local-fs-pre.target. 
Aug 13 00:49:03.372591 systemd[1]: Stopped target swap.target. Aug 13 00:49:03.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.376054 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:49:03.376205 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:49:03.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.379995 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:49:03.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.383373 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:49:03.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.383508 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:49:03.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.387988 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:49:03.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:03.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.423290 iscsid[830]: iscsid shutting down. Aug 13 00:49:03.388114 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:49:03.391985 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:49:03.392106 systemd[1]: Stopped ignition-files.service. Aug 13 00:49:03.432208 ignition[1025]: INFO : Ignition 2.14.0 Aug 13 00:49:03.432208 ignition[1025]: INFO : Stage: umount Aug 13 00:49:03.432208 ignition[1025]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:49:03.432208 ignition[1025]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:49:03.395874 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:49:03.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.449627 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:49:03.449627 ignition[1025]: INFO : umount: umount passed Aug 13 00:49:03.449627 ignition[1025]: INFO : Ignition finished successfully Aug 13 00:49:03.396001 systemd[1]: Stopped flatcar-metadata-hostname.service. Aug 13 00:49:03.401300 systemd[1]: Stopping ignition-mount.service... Aug 13 00:49:03.404998 systemd[1]: Stopping iscsid.service... Aug 13 00:49:03.410039 systemd[1]: Stopping sysroot-boot.service... 
Aug 13 00:49:03.411989 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:49:03.412178 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:49:03.414521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:49:03.414672 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:49:03.418938 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 00:49:03.419053 systemd[1]: Stopped iscsid.service. Aug 13 00:49:03.421507 systemd[1]: Stopping iscsiuio.service... Aug 13 00:49:03.441034 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:49:03.441120 systemd[1]: Stopped iscsiuio.service. Aug 13 00:49:03.449684 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:49:03.458155 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:49:03.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.485663 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:49:03.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.486141 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:49:03.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.486216 systemd[1]: Stopped ignition-mount.service. 
Aug 13 00:49:03.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.490748 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:49:03.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.490793 systemd[1]: Stopped ignition-disks.service. Aug 13 00:49:03.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.494201 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:49:03.494251 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:49:03.498063 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:49:03.498113 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:49:03.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.502749 systemd[1]: Stopped target network.target. Aug 13 00:49:03.504606 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:49:03.504652 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:49:03.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.508444 systemd[1]: Stopped target paths.target. Aug 13 00:49:03.510373 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Aug 13 00:49:03.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.514743 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:49:03.518878 systemd[1]: Stopped target slices.target. Aug 13 00:49:03.549000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:49:03.519771 systemd[1]: Stopped target sockets.target. Aug 13 00:49:03.520189 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:49:03.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.520228 systemd[1]: Closed iscsid.socket. Aug 13 00:49:03.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.520593 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:49:03.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.520622 systemd[1]: Closed iscsiuio.socket. Aug 13 00:49:03.521147 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:49:03.521184 systemd[1]: Stopped ignition-setup.service. Aug 13 00:49:03.521818 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:49:03.522061 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:49:03.535781 systemd-networkd[825]: eth0: DHCPv6 lease lost Aug 13 00:49:03.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:03.582000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:49:03.536835 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:49:03.536929 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:49:03.543263 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:49:03.543359 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:49:03.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.550493 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:49:03.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.550533 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:49:03.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.554157 systemd[1]: Stopping network-cleanup.service... Aug 13 00:49:03.557481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:49:03.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:03.557534 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:49:03.631450 kernel: hv_netvsc 7ced8d6c-64a8-7ced-8d6c-64a87ced8d6c eth0: Data path switched from VF: enP55815s1 Aug 13 00:49:03.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.561474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:49:03.561526 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:49:03.565002 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:49:03.565052 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:49:03.569427 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:49:03.573759 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:49:03.579063 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:49:03.579238 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:49:03.585079 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:49:03.585117 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:49:03.589578 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:49:03.589620 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:49:03.596085 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:49:03.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.596131 systemd[1]: Stopped dracut-pre-udev.service. 
Aug 13 00:49:03.599756 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:49:03.599810 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:49:03.603979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:49:03.604027 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:49:03.608756 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:49:03.611784 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:49:03.611849 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 00:49:03.614410 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:49:03.614460 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:49:03.616801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:49:03.616857 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:49:03.622483 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:49:03.622948 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:49:03.623030 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:49:03.657821 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:49:03.657907 systemd[1]: Stopped network-cleanup.service. Aug 13 00:49:03.753796 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:49:03.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:03.753934 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:49:03.759152 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:49:03.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:03.763133 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:49:03.763201 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:49:03.768151 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:49:03.782109 systemd[1]: Switching root. Aug 13 00:49:03.807745 systemd-journald[183]: Journal stopped Aug 13 00:49:28.191185 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 13 00:49:28.191217 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:49:28.191230 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:49:28.191240 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:49:28.191251 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:49:28.191260 kernel: SELinux: policy capability open_perms=1 Aug 13 00:49:28.191282 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:49:28.191300 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:49:28.191316 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:49:28.191331 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:49:28.191346 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:49:28.191361 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:49:28.191378 kernel: kauditd_printk_skb: 43 callbacks suppressed Aug 13 00:49:28.191395 kernel: audit: type=1403 audit(1755046146.660:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:49:28.191417 systemd[1]: Successfully loaded SELinux policy in 319.296ms. Aug 13 00:49:28.191441 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.512ms. 
Aug 13 00:49:28.191463 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:49:28.191481 systemd[1]: Detected virtualization microsoft. Aug 13 00:49:28.191502 systemd[1]: Detected architecture x86-64. Aug 13 00:49:28.191523 systemd[1]: Detected first boot. Aug 13 00:49:28.191542 systemd[1]: Hostname set to . Aug 13 00:49:28.191559 systemd[1]: Initializing machine ID from random generator. Aug 13 00:49:28.191579 kernel: audit: type=1400 audit(1755046147.578:83): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:49:28.191596 kernel: audit: type=1400 audit(1755046147.594:84): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:49:28.191615 kernel: audit: type=1400 audit(1755046147.594:85): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:49:28.191638 kernel: audit: type=1334 audit(1755046147.607:86): prog-id=10 op=LOAD Aug 13 00:49:28.191657 kernel: audit: type=1334 audit(1755046147.607:87): prog-id=10 op=UNLOAD Aug 13 00:49:28.191678 kernel: audit: type=1334 audit(1755046147.619:88): prog-id=11 op=LOAD Aug 13 00:49:28.191694 kernel: audit: type=1334 audit(1755046147.619:89): prog-id=11 op=UNLOAD Aug 13 00:49:28.191727 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Aug 13 00:49:28.191747 kernel: audit: type=1400 audit(1755046149.150:90): avc: denied { associate } for pid=1059 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:49:28.191765 kernel: audit: type=1300 audit(1755046149.150:90): arch=c000003e syscall=188 success=yes exit=0 a0=c0000242b2 a1=c00002a3c0 a2=c000028800 a3=32 items=0 ppid=1042 pid=1059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:49:28.191787 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:49:28.191805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:49:28.191824 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:49:28.191844 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:49:28.191862 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:49:28.191881 kernel: audit: type=1334 audit(1755046167.599:92): prog-id=12 op=LOAD Aug 13 00:49:28.191897 kernel: audit: type=1334 audit(1755046167.599:93): prog-id=3 op=UNLOAD Aug 13 00:49:28.191916 kernel: audit: type=1334 audit(1755046167.604:94): prog-id=13 op=LOAD Aug 13 00:49:28.191938 kernel: audit: type=1334 audit(1755046167.609:95): prog-id=14 op=LOAD Aug 13 00:49:28.191955 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Aug 13 00:49:28.191977 kernel: audit: type=1334 audit(1755046167.609:96): prog-id=4 op=UNLOAD Aug 13 00:49:28.191993 kernel: audit: type=1334 audit(1755046167.609:97): prog-id=5 op=UNLOAD Aug 13 00:49:28.192010 kernel: audit: type=1131 audit(1755046167.610:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.192027 systemd[1]: Stopped initrd-switch-root.service. Aug 13 00:49:28.192048 kernel: audit: type=1334 audit(1755046167.654:99): prog-id=12 op=UNLOAD Aug 13 00:49:28.192067 kernel: audit: type=1130 audit(1755046167.661:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.192088 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:49:28.192106 kernel: audit: type=1131 audit(1755046167.661:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.192124 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:49:28.192145 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:49:28.192160 systemd[1]: Created slice system-getty.slice. Aug 13 00:49:28.192181 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:49:28.192200 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:49:28.192214 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:49:28.192226 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:49:28.192238 systemd[1]: Created slice user.slice. Aug 13 00:49:28.192251 systemd[1]: Started systemd-ask-password-console.path. 
Aug 13 00:49:28.192260 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:49:28.192273 systemd[1]: Set up automount boot.automount. Aug 13 00:49:28.192283 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:49:28.192295 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:49:28.192307 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:49:28.192319 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 00:49:28.192329 systemd[1]: Reached target integritysetup.target. Aug 13 00:49:28.192341 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:49:28.192351 systemd[1]: Reached target remote-fs.target. Aug 13 00:49:28.192363 systemd[1]: Reached target slices.target. Aug 13 00:49:28.192374 systemd[1]: Reached target swap.target. Aug 13 00:49:28.192387 systemd[1]: Reached target torcx.target. Aug 13 00:49:28.192398 systemd[1]: Reached target veritysetup.target. Aug 13 00:49:28.192410 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:49:28.192420 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:49:28.192432 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:49:28.192442 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:49:28.192456 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:49:28.192467 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:49:28.192479 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:49:28.192489 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:49:28.192501 systemd[1]: Mounting media.mount... Aug 13 00:49:28.192511 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:28.192524 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:49:28.192533 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:49:28.192546 systemd[1]: Mounting tmp.mount... Aug 13 00:49:28.192558 systemd[1]: Starting flatcar-tmpfiles.service... 
Aug 13 00:49:28.192570 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:49:28.192580 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:49:28.192592 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:49:28.192602 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:49:28.192616 systemd[1]: Starting modprobe@drm.service... Aug 13 00:49:28.192629 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:49:28.192642 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:49:28.192654 systemd[1]: Starting modprobe@loop.service... Aug 13 00:49:28.192668 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:49:28.192681 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:49:28.192691 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 00:49:28.192733 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:49:28.192746 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:49:28.192757 systemd[1]: Stopped systemd-journald.service. Aug 13 00:49:28.192769 systemd[1]: Starting systemd-journald.service... Aug 13 00:49:28.192780 kernel: loop: module loaded Aug 13 00:49:28.192794 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:49:28.192804 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:49:28.192816 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:49:28.192826 kernel: fuse: init (API version 7.34) Aug 13 00:49:28.192838 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:49:28.192848 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:49:28.192860 systemd[1]: Stopped verity-setup.service. Aug 13 00:49:28.192871 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:49:28.192883 systemd[1]: Mounted dev-hugepages.mount. 
Aug 13 00:49:28.192895 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:49:28.192908 systemd[1]: Mounted media.mount. Aug 13 00:49:28.192921 systemd-journald[1140]: Journal started Aug 13 00:49:28.192968 systemd-journald[1140]: Runtime Journal (/run/log/journal/ba61a60e7d3b4ec09761b5e5e16d8793) is 8.0M, max 159.0M, 151.0M free. Aug 13 00:49:06.660000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:49:07.578000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:49:07.594000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:49:07.594000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:49:07.607000 audit: BPF prog-id=10 op=LOAD Aug 13 00:49:07.607000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:49:07.619000 audit: BPF prog-id=11 op=LOAD Aug 13 00:49:07.619000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:49:09.150000 audit[1059]: AVC avc: denied { associate } for pid=1059 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:49:09.150000 audit[1059]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0000242b2 a1=c00002a3c0 a2=c000028800 a3=32 items=0 ppid=1042 pid=1059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:49:09.150000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:49:09.158000 audit[1059]: AVC avc: denied { associate } for pid=1059 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:49:09.158000 audit[1059]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000024399 a2=1ed a3=0 items=2 ppid=1042 pid=1059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:49:09.158000 audit: CWD cwd="/" Aug 13 00:49:09.158000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:49:09.158000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:49:09.158000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:49:27.599000 audit: BPF prog-id=12 op=LOAD Aug 13 00:49:27.599000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:49:27.604000 audit: BPF prog-id=13 op=LOAD Aug 13 00:49:27.609000 audit: BPF prog-id=14 op=LOAD Aug 13 00:49:27.609000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:49:27.609000 audit: BPF prog-id=5 
op=UNLOAD Aug 13 00:49:27.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:27.654000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:49:27.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:27.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:28.056000 audit: BPF prog-id=15 op=LOAD Aug 13 00:49:28.056000 audit: BPF prog-id=16 op=LOAD Aug 13 00:49:28.056000 audit: BPF prog-id=17 op=LOAD Aug 13 00:49:28.056000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:49:28.056000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:49:28.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.187000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:49:28.187000 audit[1140]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff3a5e0ee0 a2=4000 a3=7fff3a5e0f7c items=0 ppid=1 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:49:28.187000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:49:09.053906 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:49:27.598085 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:49:09.083612 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:49:27.598097 systemd[1]: Unnecessary job was removed for dev-sda6.device. 
Aug 13 00:49:09.083641 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:49:27.611113 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:49:09.083678 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 00:49:09.083695 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 00:49:09.083790 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 00:49:09.083806 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 00:49:09.084029 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 00:49:09.084085 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:49:09.084099 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:49:09.130836 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 00:49:09.130915 /usr/lib/systemd/system-generators/torcx-generator[1059]: 
time="2025-08-13T00:49:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 00:49:09.130978 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 00:49:09.130997 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 00:49:09.131029 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 00:49:09.131046 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 00:49:23.679826 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:49:23.680127 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:49:23.680274 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:49:23.680487 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:49:23.680547 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 00:49:23.680613 /usr/lib/systemd/system-generators/torcx-generator[1059]: time="2025-08-13T00:49:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 00:49:28.197749 systemd[1]: Started systemd-journald.service. Aug 13 00:49:28.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.200033 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:49:28.202129 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:49:28.204300 systemd[1]: Mounted tmp.mount. Aug 13 00:49:28.206193 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:49:28.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.209182 systemd[1]: Finished kmod-static-nodes.service. 
Aug 13 00:49:28.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.211447 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:49:28.211581 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:49:28.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.213996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:49:28.214143 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:49:28.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.216495 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:49:28.216638 systemd[1]: Finished modprobe@drm.service. Aug 13 00:49:28.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:28.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.219019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:49:28.219162 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:49:28.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.221616 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:49:28.221783 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:49:28.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.226221 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:49:28.226463 systemd[1]: Finished modprobe@loop.service. Aug 13 00:49:28.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:49:28.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.228788 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:49:28.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.231391 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:49:28.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.234373 systemd[1]: Reached target network-pre.target. Aug 13 00:49:28.237420 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:49:28.240914 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:49:28.245517 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:49:28.414391 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:49:28.417973 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:49:28.420334 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:49:28.421454 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:49:28.423681 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:49:28.424827 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:49:28.428725 systemd[1]: Finished systemd-modules-load.service. 
Aug 13 00:49:28.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.431871 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:49:28.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.434340 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:49:28.436978 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:49:28.440731 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:49:28.443631 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:49:28.466483 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:49:28.478702 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:49:28.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.481196 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:49:28.492971 systemd-journald[1140]: Time spent on flushing to /var/log/journal/ba61a60e7d3b4ec09761b5e5e16d8793 is 24.014ms for 1161 entries. Aug 13 00:49:28.492971 systemd-journald[1140]: System Journal (/var/log/journal/ba61a60e7d3b4ec09761b5e5e16d8793) is 8.0M, max 2.6G, 2.6G free. Aug 13 00:49:28.587899 systemd-journald[1140]: Received client request to flush runtime journal. Aug 13 00:49:28.589575 systemd[1]: Finished systemd-journal-flush.service. 
Aug 13 00:49:28.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:28.641582 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:49:29.340253 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:49:29.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:29.344586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:49:30.104980 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:49:30.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:30.269290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:49:30.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:49:30.272000 audit: BPF prog-id=18 op=LOAD Aug 13 00:49:30.272000 audit: BPF prog-id=19 op=LOAD Aug 13 00:49:30.272000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:49:30.272000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:49:30.273756 systemd[1]: Starting systemd-udevd.service... Aug 13 00:49:30.291456 systemd-udevd[1187]: Using default interface naming scheme 'v252'. 
Aug 13 00:49:31.477790 systemd[1]: Started systemd-udevd.service.
Aug 13 00:49:31.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:31.482485 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:49:31.480000 audit: BPF prog-id=20 op=LOAD
Aug 13 00:49:31.516245 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Aug 13 00:49:31.579733 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:49:31.615000 audit[1201]: AVC avc: denied { confidentiality } for pid=1201 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 00:49:31.622733 kernel: hv_vmbus: registering driver hv_balloon
Aug 13 00:49:31.627737 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Aug 13 00:49:31.615000 audit[1201]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f8e133e0c0 a1=f83c a2=7f3c7a024bc5 a3=5 items=12 ppid=1187 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:49:31.615000 audit: CWD cwd="/"
Aug 13 00:49:31.615000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=1 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=2 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=3 name=(null) inode=15661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=4 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=5 name=(null) inode=15662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=6 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=7 name=(null) inode=15663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=8 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=9 name=(null) inode=15664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=10 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PATH item=11 name=(null) inode=15665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:49:31.615000 audit: PROCTITLE proctitle="(udev-worker)"
Aug 13 00:49:31.645263 kernel: hv_vmbus: registering driver hyperv_fb
Aug 13 00:49:31.654783 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Aug 13 00:49:31.654834 kernel: hv_utils: Registering HyperV Utility Driver
Aug 13 00:49:31.658558 kernel: hv_vmbus: registering driver hv_utils
Aug 13 00:49:31.658603 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Aug 13 00:49:31.661787 kernel: hv_utils: Heartbeat IC version 3.0
Aug 13 00:49:31.661837 kernel: hv_utils: Shutdown IC version 3.2
Aug 13 00:49:31.661863 kernel: hv_utils: TimeSync IC version 4.0
Aug 13 00:49:31.807413 kernel: Console: switching to colour dummy device 80x25
Aug 13 00:49:31.813118 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:49:31.819000 audit: BPF prog-id=21 op=LOAD
Aug 13 00:49:31.819000 audit: BPF prog-id=22 op=LOAD
Aug 13 00:49:31.819000 audit: BPF prog-id=23 op=LOAD
Aug 13 00:49:31.821324 systemd[1]: Starting systemd-userdbd.service...
Aug 13 00:49:31.881611 systemd[1]: Started systemd-userdbd.service.
Aug 13 00:49:31.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:32.068025 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Aug 13 00:49:32.170039 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:49:32.184347 systemd[1]: Finished systemd-udev-settle.service.
Aug 13 00:49:32.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:32.188406 systemd[1]: Starting lvm2-activation-early.service...
Aug 13 00:49:32.520060 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:49:32.580086 systemd[1]: Finished lvm2-activation-early.service.
Aug 13 00:49:32.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:32.583079 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:49:32.586873 systemd[1]: Starting lvm2-activation.service...
Aug 13 00:49:32.591527 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:49:32.611823 systemd[1]: Finished lvm2-activation.service.
Aug 13 00:49:32.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:32.614117 systemd[1]: Reached target local-fs-pre.target.
Aug 13 00:49:32.616431 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:49:32.616467 systemd[1]: Reached target local-fs.target.
Aug 13 00:49:32.618846 systemd[1]: Reached target machines.target.
Aug 13 00:49:32.622240 systemd[1]: Starting ldconfig.service...
Aug 13 00:49:32.653251 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:49:32.653337 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:49:32.654691 systemd[1]: Starting systemd-boot-update.service...
Aug 13 00:49:32.658451 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Aug 13 00:49:32.662324 systemd[1]: Starting systemd-machine-id-commit.service...
Aug 13 00:49:32.665549 systemd[1]: Starting systemd-sysext.service...
Aug 13 00:49:32.678956 systemd-networkd[1192]: lo: Link UP
Aug 13 00:49:32.678965 systemd-networkd[1192]: lo: Gained carrier
Aug 13 00:49:32.679607 systemd-networkd[1192]: Enumeration completed
Aug 13 00:49:32.679833 systemd[1]: Started systemd-networkd.service.
Aug 13 00:49:32.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:32.683269 systemd[1]: Starting systemd-networkd-wait-online.service...
Aug 13 00:49:32.712244 systemd-networkd[1192]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:49:32.765022 kernel: mlx5_core da07:00:02.0 enP55815s1: Link up
Aug 13 00:49:32.787011 kernel: hv_netvsc 7ced8d6c-64a8-7ced-8d6c-64a87ced8d6c eth0: Data path switched to VF: enP55815s1
Aug 13 00:49:32.787372 systemd-networkd[1192]: enP55815s1: Link UP
Aug 13 00:49:32.787546 systemd-networkd[1192]: eth0: Link UP
Aug 13 00:49:32.787554 systemd-networkd[1192]: eth0: Gained carrier
Aug 13 00:49:32.793216 systemd-networkd[1192]: enP55815s1: Gained carrier
Aug 13 00:49:32.822124 systemd-networkd[1192]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16
Aug 13 00:49:33.240507 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1266 (bootctl)
Aug 13 00:49:33.242221 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Aug 13 00:49:33.392474 systemd[1]: Unmounting usr-share-oem.mount...
Aug 13 00:49:33.413103 kernel: kauditd_printk_skb: 68 callbacks suppressed
Aug 13 00:49:33.413187 kernel: audit: type=1130 audit(1755046173.396:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.397107 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Aug 13 00:49:33.450715 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Aug 13 00:49:33.450936 systemd[1]: Unmounted usr-share-oem.mount.
Aug 13 00:49:33.503012 kernel: loop0: detected capacity change from 0 to 221472
Aug 13 00:49:33.789131 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:49:33.789837 systemd[1]: Finished systemd-machine-id-commit.service.
Aug 13 00:49:33.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.804009 kernel: audit: type=1130 audit(1755046173.792:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.837009 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:49:33.855015 kernel: loop1: detected capacity change from 0 to 221472
Aug 13 00:49:33.867916 (sd-sysext)[1279]: Using extensions 'kubernetes'.
Aug 13 00:49:33.869053 (sd-sysext)[1279]: Merged extensions into '/usr'.
Aug 13 00:49:33.884694 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:49:33.886230 systemd[1]: Mounting usr-share-oem.mount...
Aug 13 00:49:33.887804 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:49:33.891518 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:49:33.893931 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:49:33.898063 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:49:33.899551 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:49:33.899675 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:49:33.899794 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:49:33.903160 systemd[1]: Mounted usr-share-oem.mount.
Aug 13 00:49:33.904700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:49:33.904824 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:49:33.906403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:49:33.906504 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:49:33.906931 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:49:33.907064 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:49:33.907825 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:49:33.907930 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:49:33.910240 systemd[1]: Finished systemd-sysext.service.
Aug 13 00:49:33.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.912982 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:49:33.934060 kernel: audit: type=1130 audit(1755046173.905:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.934124 kernel: audit: type=1131 audit(1755046173.905:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.945753 kernel: audit: type=1130 audit(1755046173.905:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.946260 systemd[1]: Starting systemd-tmpfiles-setup.service...
Aug 13 00:49:33.956009 kernel: audit: type=1131 audit(1755046173.905:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.956069 kernel: audit: type=1130 audit(1755046173.906:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.956093 kernel: audit: type=1131 audit(1755046173.906:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.956116 kernel: audit: type=1130 audit(1755046173.909:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:33.982241 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Aug 13 00:49:33.995063 systemd[1]: Reloading.
Aug 13 00:49:34.000468 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:49:34.016611 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:49:34.068124 /usr/lib/systemd/system-generators/torcx-generator[1306]: time="2025-08-13T00:49:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:49:34.082281 /usr/lib/systemd/system-generators/torcx-generator[1306]: time="2025-08-13T00:49:34Z" level=info msg="torcx already run"
Aug 13 00:49:34.156232 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:49:34.156253 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:49:34.172616 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:49:34.236000 audit: BPF prog-id=24 op=LOAD
Aug 13 00:49:34.236000 audit: BPF prog-id=15 op=UNLOAD
Aug 13 00:49:34.242000 audit: BPF prog-id=25 op=LOAD
Aug 13 00:49:34.244059 kernel: audit: type=1334 audit(1755046174.236:162): prog-id=24 op=LOAD
Aug 13 00:49:34.242000 audit: BPF prog-id=26 op=LOAD
Aug 13 00:49:34.242000 audit: BPF prog-id=16 op=UNLOAD
Aug 13 00:49:34.242000 audit: BPF prog-id=17 op=UNLOAD
Aug 13 00:49:34.243000 audit: BPF prog-id=27 op=LOAD
Aug 13 00:49:34.243000 audit: BPF prog-id=20 op=UNLOAD
Aug 13 00:49:34.244000 audit: BPF prog-id=28 op=LOAD
Aug 13 00:49:34.244000 audit: BPF prog-id=21 op=UNLOAD
Aug 13 00:49:34.244000 audit: BPF prog-id=29 op=LOAD
Aug 13 00:49:34.244000 audit: BPF prog-id=30 op=LOAD
Aug 13 00:49:34.244000 audit: BPF prog-id=22 op=UNLOAD
Aug 13 00:49:34.244000 audit: BPF prog-id=23 op=UNLOAD
Aug 13 00:49:34.246000 audit: BPF prog-id=31 op=LOAD
Aug 13 00:49:34.246000 audit: BPF prog-id=32 op=LOAD
Aug 13 00:49:34.246000 audit: BPF prog-id=18 op=UNLOAD
Aug 13 00:49:34.246000 audit: BPF prog-id=19 op=UNLOAD
Aug 13 00:49:34.260711 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:49:34.261007 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:49:34.262671 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:49:34.265530 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:49:34.267911 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:49:34.268967 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:49:34.269120 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:49:34.269287 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:49:34.270356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:49:34.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.270786 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:49:34.277722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:49:34.277873 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:49:34.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.279557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:49:34.279671 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:49:34.281180 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:49:34.282544 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:49:34.282773 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:49:34.283920 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:49:34.286330 systemd[1]: Starting modprobe@drm.service...
Aug 13 00:49:34.288145 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 00:49:34.288237 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:49:34.288354 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:49:34.288468 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:49:34.290934 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:49:34.291131 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:49:34.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.292408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:49:34.292522 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:49:34.292701 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:49:34.325320 systemd-networkd[1192]: eth0: Gained IPv6LL
Aug 13 00:49:34.330868 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 00:49:34.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.766067 systemd-fsck[1275]: fsck.fat 4.2 (2021-01-31)
Aug 13 00:49:34.766067 systemd-fsck[1275]: /dev/sda1: 789 files, 119324/258078 clusters
Aug 13 00:49:34.767938 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Aug 13 00:49:34.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:34.773393 systemd[1]: Mounting boot.mount...
Aug 13 00:49:34.792390 systemd[1]: Mounted boot.mount.
Aug 13 00:49:34.812177 systemd[1]: Finished systemd-boot-update.service.
Aug 13 00:49:34.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.316156 systemd[1]: Finished systemd-tmpfiles-setup.service.
Aug 13 00:49:37.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.320866 systemd[1]: Starting audit-rules.service...
Aug 13 00:49:37.324423 systemd[1]: Starting clean-ca-certificates.service...
Aug 13 00:49:37.328362 systemd[1]: Starting systemd-journal-catalog-update.service...
Aug 13 00:49:37.330000 audit: BPF prog-id=33 op=LOAD
Aug 13 00:49:37.333196 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:49:37.338000 audit: BPF prog-id=34 op=LOAD
Aug 13 00:49:37.341250 systemd[1]: Starting systemd-timesyncd.service...
Aug 13 00:49:37.345169 systemd[1]: Starting systemd-update-utmp.service...
Aug 13 00:49:37.370000 audit[1384]: SYSTEM_BOOT pid=1384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.376131 systemd[1]: Finished systemd-update-utmp.service.
Aug 13 00:49:37.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.423022 systemd[1]: Finished clean-ca-certificates.service.
Aug 13 00:49:37.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.425932 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:49:37.496869 systemd[1]: Started systemd-timesyncd.service.
Aug 13 00:49:37.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.499612 systemd[1]: Reached target time-set.target.
Aug 13 00:49:37.555135 systemd-resolved[1382]: Positive Trust Anchors:
Aug 13 00:49:37.555162 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:49:37.555212 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:49:37.732966 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 13 00:49:37.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.758862 systemd-timesyncd[1383]: Contacted time server 185.177.149.33:123 (0.flatcar.pool.ntp.org).
Aug 13 00:49:37.759025 systemd-timesyncd[1383]: Initial clock synchronization to Wed 2025-08-13 00:49:37.761885 UTC.
Aug 13 00:49:37.791503 systemd-resolved[1382]: Using system hostname 'ci-3510.3.8-a-09b422438d'.
Aug 13 00:49:37.793187 systemd[1]: Started systemd-resolved.service.
Aug 13 00:49:37.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:49:37.795818 systemd[1]: Reached target network.target.
Aug 13 00:49:37.798072 systemd[1]: Reached target network-online.target.
Aug 13 00:49:37.800325 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:49:37.817000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 00:49:37.817000 audit[1400]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcafd91820 a2=420 a3=0 items=0 ppid=1379 pid=1400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:49:37.817000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 00:49:37.818716 augenrules[1400]: No rules
Aug 13 00:49:37.819199 systemd[1]: Finished audit-rules.service.
Aug 13 00:49:44.171483 ldconfig[1265]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:49:44.182498 systemd[1]: Finished ldconfig.service.
Aug 13 00:49:44.186490 systemd[1]: Starting systemd-update-done.service...
Aug 13 00:49:44.212107 systemd[1]: Finished systemd-update-done.service.
Aug 13 00:49:44.214775 systemd[1]: Reached target sysinit.target.
Aug 13 00:49:44.217232 systemd[1]: Started motdgen.path.
Aug 13 00:49:44.219207 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 00:49:44.222334 systemd[1]: Started logrotate.timer.
Aug 13 00:49:44.224162 systemd[1]: Started mdadm.timer.
Aug 13 00:49:44.225758 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 00:49:44.228147 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:49:44.228187 systemd[1]: Reached target paths.target.
Aug 13 00:49:44.229957 systemd[1]: Reached target timers.target.
Aug 13 00:49:44.236036 systemd[1]: Listening on dbus.socket.
Aug 13 00:49:44.239306 systemd[1]: Starting docker.socket...
Aug 13 00:49:44.282129 systemd[1]: Listening on sshd.socket.
Aug 13 00:49:44.284437 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:49:44.284953 systemd[1]: Listening on docker.socket.
Aug 13 00:49:44.287297 systemd[1]: Reached target sockets.target.
Aug 13 00:49:44.289472 systemd[1]: Reached target basic.target.
Aug 13 00:49:44.291761 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:49:44.291796 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:49:44.292964 systemd[1]: Starting containerd.service...
Aug 13 00:49:44.296267 systemd[1]: Starting dbus.service...
Aug 13 00:49:44.299322 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 00:49:44.302572 systemd[1]: Starting extend-filesystems.service...
Aug 13 00:49:44.304757 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 00:49:44.332205 systemd[1]: Starting kubelet.service...
Aug 13 00:49:44.335167 systemd[1]: Starting motdgen.service... Aug 13 00:49:44.338268 systemd[1]: Started nvidia.service. Aug 13 00:49:44.341766 systemd[1]: Starting prepare-helm.service... Aug 13 00:49:44.344813 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:49:44.348063 systemd[1]: Starting sshd-keygen.service... Aug 13 00:49:44.353368 systemd[1]: Starting systemd-logind.service... Aug 13 00:49:44.355479 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:49:44.355602 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:49:44.356171 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:49:44.357148 systemd[1]: Starting update-engine.service... Aug 13 00:49:44.360833 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:49:44.396881 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:49:44.397142 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:49:44.419812 jq[1410]: false Aug 13 00:49:44.421077 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:49:44.421569 jq[1422]: true Aug 13 00:49:44.421297 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Aug 13 00:49:44.437267 extend-filesystems[1411]: Found loop1 Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda1 Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda2 Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda3 Aug 13 00:49:44.441533 extend-filesystems[1411]: Found usr Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda4 Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda6 Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda7 Aug 13 00:49:44.441533 extend-filesystems[1411]: Found sda9 Aug 13 00:49:44.441533 extend-filesystems[1411]: Checking size of /dev/sda9 Aug 13 00:49:44.466907 jq[1432]: true Aug 13 00:49:44.444286 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:49:44.444460 systemd[1]: Finished motdgen.service. Aug 13 00:49:44.523016 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:49:44.527312 systemd-logind[1420]: New seat seat0. Aug 13 00:49:44.557660 env[1434]: time="2025-08-13T00:49:44.557357621Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:49:44.568439 extend-filesystems[1411]: Old size kept for /dev/sda9 Aug 13 00:49:44.572936 extend-filesystems[1411]: Found sr0 Aug 13 00:49:44.575600 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:49:44.575810 systemd[1]: Finished extend-filesystems.service. Aug 13 00:49:44.581473 tar[1425]: linux-amd64/helm Aug 13 00:49:44.650387 env[1434]: time="2025-08-13T00:49:44.650332646Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:49:44.653400 env[1434]: time="2025-08-13T00:49:44.653363648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:49:44.655731 env[1434]: time="2025-08-13T00:49:44.655691857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:49:44.659851 env[1434]: time="2025-08-13T00:49:44.659823304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:49:44.660321 env[1434]: time="2025-08-13T00:49:44.660293667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:49:44.660425 env[1434]: time="2025-08-13T00:49:44.660407582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:49:44.660502 env[1434]: time="2025-08-13T00:49:44.660484492Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:49:44.662108 env[1434]: time="2025-08-13T00:49:44.660595507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:49:44.662291 env[1434]: time="2025-08-13T00:49:44.662269528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:49:44.662633 env[1434]: time="2025-08-13T00:49:44.662613074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:49:44.662899 env[1434]: time="2025-08-13T00:49:44.662875309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:49:44.662975 env[1434]: time="2025-08-13T00:49:44.662961020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:49:44.663119 env[1434]: time="2025-08-13T00:49:44.663101939Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:49:44.663202 env[1434]: time="2025-08-13T00:49:44.663189050Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:49:44.683059 bash[1457]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:49:44.683814 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688780043Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688832550Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688852452Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688900459Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688919961Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688938464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688956966Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.689727 env[1434]: time="2025-08-13T00:49:44.688975469Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.691860551Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.691908258Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.691928360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.691946963Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692110984Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692205597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692515138Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692546442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692565645Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692618052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692636054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692653556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692671759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693016 env[1434]: time="2025-08-13T00:49:44.692689561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692706563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692723766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692744468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692765071Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692901789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692920692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692945495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:49:44.693540 env[1434]: time="2025-08-13T00:49:44.692961497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:49:44.699019 env[1434]: time="2025-08-13T00:49:44.692982300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:49:44.699019 env[1434]: time="2025-08-13T00:49:44.698064174Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:49:44.699019 env[1434]: time="2025-08-13T00:49:44.698100978Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:49:44.699019 env[1434]: time="2025-08-13T00:49:44.698147685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:49:44.699204 env[1434]: time="2025-08-13T00:49:44.698407119Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:49:44.699204 env[1434]: time="2025-08-13T00:49:44.698480029Z" level=info msg="Connect containerd service" Aug 13 00:49:44.699204 env[1434]: time="2025-08-13T00:49:44.698528135Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:49:44.736776 env[1434]: time="2025-08-13T00:49:44.700667519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:49:44.736776 env[1434]: time="2025-08-13T00:49:44.701003563Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:49:44.736776 env[1434]: time="2025-08-13T00:49:44.701065471Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:49:44.736776 env[1434]: time="2025-08-13T00:49:44.703646414Z" level=info msg="containerd successfully booted in 0.147184s" Aug 13 00:49:44.736776 env[1434]: time="2025-08-13T00:49:44.709237755Z" level=info msg="Start subscribing containerd event" Aug 13 00:49:44.736776 env[1434]: time="2025-08-13T00:49:44.709306664Z" level=info msg="Start recovering state" Aug 13 00:49:44.701201 systemd[1]: Started containerd.service. 
Aug 13 00:49:44.738412 env[1434]: time="2025-08-13T00:49:44.738377918Z" level=info msg="Start event monitor" Aug 13 00:49:44.738547 env[1434]: time="2025-08-13T00:49:44.738533038Z" level=info msg="Start snapshots syncer" Aug 13 00:49:44.738613 env[1434]: time="2025-08-13T00:49:44.738601347Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:49:44.738670 env[1434]: time="2025-08-13T00:49:44.738659355Z" level=info msg="Start streaming server" Aug 13 00:49:44.832653 systemd[1]: nvidia.service: Deactivated successfully. Aug 13 00:49:45.144401 dbus-daemon[1409]: [system] SELinux support is enabled Aug 13 00:49:45.144601 systemd[1]: Started dbus.service. Aug 13 00:49:45.149431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:49:45.149460 systemd[1]: Reached target system-config.target. Aug 13 00:49:45.152201 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:49:45.152224 systemd[1]: Reached target user-config.target. Aug 13 00:49:45.158126 systemd[1]: Started systemd-logind.service. Aug 13 00:49:45.160937 dbus-daemon[1409]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:49:45.267292 tar[1425]: linux-amd64/LICENSE Aug 13 00:49:45.267566 tar[1425]: linux-amd64/README.md Aug 13 00:49:45.274649 systemd[1]: Finished prepare-helm.service. Aug 13 00:49:45.449100 update_engine[1421]: I0813 00:49:45.431753 1421 main.cc:92] Flatcar Update Engine starting Aug 13 00:49:45.513890 systemd[1]: Started update-engine.service. Aug 13 00:49:45.518957 systemd[1]: Started locksmithd.service. Aug 13 00:49:45.521357 update_engine[1421]: I0813 00:49:45.521050 1421 update_check_scheduler.cc:74] Next update check in 5m32s Aug 13 00:49:45.886543 systemd[1]: Started kubelet.service. 
Aug 13 00:49:46.125560 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:49:46.154895 systemd[1]: Finished sshd-keygen.service. Aug 13 00:49:46.158978 systemd[1]: Starting issuegen.service... Aug 13 00:49:46.162577 systemd[1]: Started waagent.service. Aug 13 00:49:46.173386 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:49:46.173573 systemd[1]: Finished issuegen.service. Aug 13 00:49:46.177346 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:49:46.203154 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:49:46.207343 systemd[1]: Started getty@tty1.service. Aug 13 00:49:46.211083 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:49:46.214708 systemd[1]: Reached target getty.target. Aug 13 00:49:46.216737 systemd[1]: Reached target multi-user.target. Aug 13 00:49:46.221053 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:49:46.234388 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:49:46.234589 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:49:46.237256 systemd[1]: Startup finished in 1.029s (firmware) + 31.051s (loader) + 900ms (kernel) + 15.333s (initrd) + 40.065s (userspace) = 1min 28.381s. Aug 13 00:49:46.555044 kubelet[1515]: E0813 00:49:46.554910 1515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:49:46.556865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:49:46.557059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:49:46.557347 systemd[1]: kubelet.service: Consumed 1.143s CPU time. 
Aug 13 00:49:47.140610 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:49:47.150946 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:49:47.152499 login[1539]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:49:47.368091 systemd[1]: Created slice user-500.slice. Aug 13 00:49:47.370203 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:49:47.373677 systemd-logind[1420]: New session 2 of user core. Aug 13 00:49:47.377405 systemd-logind[1420]: New session 1 of user core. Aug 13 00:49:47.414262 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:49:47.416617 systemd[1]: Starting user@500.service... Aug 13 00:49:47.452014 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:49:47.926094 systemd[1542]: Queued start job for default target default.target. Aug 13 00:49:47.926932 systemd[1542]: Reached target paths.target. Aug 13 00:49:47.926970 systemd[1542]: Reached target sockets.target. Aug 13 00:49:47.927009 systemd[1542]: Reached target timers.target. Aug 13 00:49:47.927031 systemd[1542]: Reached target basic.target. Aug 13 00:49:47.927179 systemd[1]: Started user@500.service. Aug 13 00:49:47.928603 systemd[1]: Started session-1.scope. Aug 13 00:49:47.929448 systemd[1]: Started session-2.scope. Aug 13 00:49:47.930399 systemd[1542]: Reached target default.target. Aug 13 00:49:47.930603 systemd[1542]: Startup finished in 472ms. Aug 13 00:49:56.807103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:49:56.807414 systemd[1]: Stopped kubelet.service. Aug 13 00:49:56.807481 systemd[1]: kubelet.service: Consumed 1.143s CPU time. Aug 13 00:49:56.809754 systemd[1]: Starting kubelet.service... Aug 13 00:49:57.629500 systemd[1]: Started kubelet.service. 
Aug 13 00:49:57.689853 kubelet[1568]: E0813 00:49:57.689808 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:49:57.692817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:49:57.692982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:49:57.898068 waagent[1533]: 2025-08-13T00:49:57.897843Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Aug 13 00:49:57.932570 waagent[1533]: 2025-08-13T00:49:57.932449Z INFO Daemon Daemon OS: flatcar 3510.3.8 Aug 13 00:49:57.935003 waagent[1533]: 2025-08-13T00:49:57.934923Z INFO Daemon Daemon Python: 3.9.16 Aug 13 00:49:57.937682 waagent[1533]: 2025-08-13T00:49:57.937611Z INFO Daemon Daemon Run daemon Aug 13 00:49:57.940533 waagent[1533]: 2025-08-13T00:49:57.940263Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Aug 13 00:49:57.968077 waagent[1533]: 2025-08-13T00:49:57.967902Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Aug 13 00:49:57.975531 waagent[1533]: 2025-08-13T00:49:57.975416Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.975917Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.976944Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.978404Z INFO Daemon Daemon Activate resource disk Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.979142Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.987129Z INFO Daemon Daemon Found device: None Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.987914Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.988736Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.990541Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:57.991516Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:58.001230Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:58.004706Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:58.005606Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:49:58.018836 waagent[1533]: 2025-08-13T00:49:58.006499Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:49:58.173224 waagent[1533]: 2025-08-13T00:49:58.172471Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:49:58.258823 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 00:49:58.298683 waagent[1533]: 2025-08-13T00:49:58.298512Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:49:58.301958 waagent[1533]: 2025-08-13T00:49:58.301881Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:49:58.305216 waagent[1533]: 2025-08-13T00:49:58.305145Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Aug 13 00:49:58.308587 waagent[1533]: 2025-08-13T00:49:58.308521Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:49:58.311442 waagent[1533]: 2025-08-13T00:49:58.311376Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:49:58.314389 waagent[1533]: 2025-08-13T00:49:58.314328Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:49:58.496394 waagent[1533]: 2025-08-13T00:49:58.496247Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:49:58.504702 waagent[1533]: 2025-08-13T00:49:58.497183Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:49:58.504702 waagent[1533]: 2025-08-13T00:49:58.498129Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:49:58.881236 waagent[1533]: 2025-08-13T00:49:58.881080Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:49:58.892566 waagent[1533]: 2025-08-13T00:49:58.892482Z INFO Daemon Daemon Forcing an update of the goal state.. 
Aug 13 00:49:58.897513 waagent[1533]: 2025-08-13T00:49:58.892861Z INFO Daemon Daemon Fetching goal state [incarnation 1] Aug 13 00:49:58.968019 waagent[1533]: 2025-08-13T00:49:58.967862Z INFO Daemon Daemon Found private key matching thumbprint 6AE62A6087A0A50520E299654C6087CFA79AC367 Aug 13 00:49:58.972306 waagent[1533]: 2025-08-13T00:49:58.972227Z INFO Daemon Daemon Fetch goal state completed Aug 13 00:49:59.015785 waagent[1533]: 2025-08-13T00:49:59.015697Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 017bb882-76ae-4f75-93d2-dffa73de02a1 New eTag: 3065632420829098480] Aug 13 00:49:59.021803 waagent[1533]: 2025-08-13T00:49:59.021727Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:49:59.035307 waagent[1533]: 2025-08-13T00:49:59.035244Z INFO Daemon Daemon Starting provisioning Aug 13 00:49:59.038096 waagent[1533]: 2025-08-13T00:49:59.038034Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:49:59.040705 waagent[1533]: 2025-08-13T00:49:59.040645Z INFO Daemon Daemon Set hostname [ci-3510.3.8-a-09b422438d] Aug 13 00:49:59.078753 waagent[1533]: 2025-08-13T00:49:59.078601Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-a-09b422438d] Aug 13 00:49:59.082961 waagent[1533]: 2025-08-13T00:49:59.082858Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:49:59.087095 waagent[1533]: 2025-08-13T00:49:59.087009Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:49:59.102113 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Aug 13 00:49:59.102378 systemd[1]: Stopped systemd-networkd-wait-online.service. Aug 13 00:49:59.102450 systemd[1]: Stopping systemd-networkd-wait-online.service... Aug 13 00:49:59.102831 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:49:59.107041 systemd-networkd[1192]: eth0: DHCPv6 lease lost Aug 13 00:49:59.108361 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Aug 13 00:49:59.108512 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:49:59.110964 systemd[1]: Starting systemd-networkd.service... Aug 13 00:49:59.143743 systemd-networkd[1597]: enP55815s1: Link UP Aug 13 00:49:59.143754 systemd-networkd[1597]: enP55815s1: Gained carrier Aug 13 00:49:59.145248 systemd-networkd[1597]: eth0: Link UP Aug 13 00:49:59.145258 systemd-networkd[1597]: eth0: Gained carrier Aug 13 00:49:59.145701 systemd-networkd[1597]: lo: Link UP Aug 13 00:49:59.145710 systemd-networkd[1597]: lo: Gained carrier Aug 13 00:49:59.146049 systemd-networkd[1597]: eth0: Gained IPv6LL Aug 13 00:49:59.146324 systemd-networkd[1597]: Enumeration completed Aug 13 00:49:59.146432 systemd[1]: Started systemd-networkd.service. Aug 13 00:49:59.148920 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:49:59.156707 waagent[1533]: 2025-08-13T00:49:59.150292Z INFO Daemon Daemon Create user account if not exists Aug 13 00:49:59.156707 waagent[1533]: 2025-08-13T00:49:59.154049Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:49:59.157648 systemd-networkd[1597]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:49:59.158090 waagent[1533]: 2025-08-13T00:49:59.157984Z INFO Daemon Daemon Configure sudoer Aug 13 00:49:59.180138 waagent[1533]: 2025-08-13T00:49:59.180037Z INFO Daemon Daemon Configure sshd Aug 13 00:49:59.184155 waagent[1533]: 2025-08-13T00:49:59.180467Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:49:59.191070 systemd-networkd[1597]: eth0: DHCPv4 address 10.200.4.36/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 00:49:59.194691 systemd[1]: Finished systemd-networkd-wait-online.service. 
Aug 13 00:50:00.329372 waagent[1533]: 2025-08-13T00:50:00.329280Z INFO Daemon Daemon Provisioning complete Aug 13 00:50:00.344242 waagent[1533]: 2025-08-13T00:50:00.344169Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:50:00.347536 waagent[1533]: 2025-08-13T00:50:00.347468Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 00:50:00.352737 waagent[1533]: 2025-08-13T00:50:00.352669Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Aug 13 00:50:00.618916 waagent[1603]: 2025-08-13T00:50:00.618739Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Aug 13 00:50:00.619644 waagent[1603]: 2025-08-13T00:50:00.619577Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:50:00.619795 waagent[1603]: 2025-08-13T00:50:00.619740Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:50:00.631239 waagent[1603]: 2025-08-13T00:50:00.631162Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Aug 13 00:50:00.631399 waagent[1603]: 2025-08-13T00:50:00.631343Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Aug 13 00:50:00.683610 waagent[1603]: 2025-08-13T00:50:00.683492Z INFO ExtHandler ExtHandler Found private key matching thumbprint 6AE62A6087A0A50520E299654C6087CFA79AC367
Aug 13 00:50:00.683892 waagent[1603]: 2025-08-13T00:50:00.683832Z INFO ExtHandler ExtHandler Fetch goal state completed
Aug 13 00:50:00.697228 waagent[1603]: 2025-08-13T00:50:00.697167Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: f233c484-1ebd-403f-a0d4-18729bb55a5e New eTag: 3065632420829098480]
Aug 13 00:50:00.697734 waagent[1603]: 2025-08-13T00:50:00.697674Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Aug 13 00:50:00.836892 waagent[1603]: 2025-08-13T00:50:00.836727Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Aug 13 00:50:00.862761 waagent[1603]: 2025-08-13T00:50:00.862653Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1603
Aug 13 00:50:00.866452 waagent[1603]: 2025-08-13T00:50:00.866382Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Aug 13 00:50:00.867618 waagent[1603]: 2025-08-13T00:50:00.867557Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Aug 13 00:50:00.998721 waagent[1603]: 2025-08-13T00:50:00.998592Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Aug 13 00:50:00.999108 waagent[1603]: 2025-08-13T00:50:00.999040Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Aug 13 00:50:01.007354 waagent[1603]: 2025-08-13T00:50:01.007297Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Aug 13 00:50:01.007834 waagent[1603]: 2025-08-13T00:50:01.007773Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Aug 13 00:50:01.008914 waagent[1603]: 2025-08-13T00:50:01.008846Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Aug 13 00:50:01.010249 waagent[1603]: 2025-08-13T00:50:01.010190Z INFO ExtHandler ExtHandler Starting env monitor service.
Aug 13 00:50:01.010488 waagent[1603]: 2025-08-13T00:50:01.010430Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:50:01.010996 waagent[1603]: 2025-08-13T00:50:01.010911Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:50:01.011520 waagent[1603]: 2025-08-13T00:50:01.011461Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Aug 13 00:50:01.011815 waagent[1603]: 2025-08-13T00:50:01.011757Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Aug 13 00:50:01.011815 waagent[1603]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Aug 13 00:50:01.011815 waagent[1603]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Aug 13 00:50:01.011815 waagent[1603]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Aug 13 00:50:01.011815 waagent[1603]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:50:01.011815 waagent[1603]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:50:01.011815 waagent[1603]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:50:01.015037 waagent[1603]: 2025-08-13T00:50:01.014822Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Aug 13 00:50:01.015177 waagent[1603]: 2025-08-13T00:50:01.015109Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:50:01.015628 waagent[1603]: 2025-08-13T00:50:01.015573Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:50:01.016399 waagent[1603]: 2025-08-13T00:50:01.016339Z INFO EnvHandler ExtHandler Configure routes
Aug 13 00:50:01.016559 waagent[1603]: 2025-08-13T00:50:01.016509Z INFO EnvHandler ExtHandler Gateway:None
Aug 13 00:50:01.016822 waagent[1603]: 2025-08-13T00:50:01.016769Z INFO EnvHandler ExtHandler Routes:None
Aug 13 00:50:01.018327 waagent[1603]: 2025-08-13T00:50:01.018268Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Aug 13 00:50:01.018422 waagent[1603]: 2025-08-13T00:50:01.018366Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Aug 13 00:50:01.019290 waagent[1603]: 2025-08-13T00:50:01.019226Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Aug 13 00:50:01.019386 waagent[1603]: 2025-08-13T00:50:01.019329Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Aug 13 00:50:01.019919 waagent[1603]: 2025-08-13T00:50:01.019864Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Aug 13 00:50:01.031133 waagent[1603]: 2025-08-13T00:50:01.031060Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Aug 13 00:50:01.031771 waagent[1603]: 2025-08-13T00:50:01.031719Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Aug 13 00:50:01.032698 waagent[1603]: 2025-08-13T00:50:01.032637Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Aug 13 00:50:01.056018 waagent[1603]: 2025-08-13T00:50:01.055931Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Aug 13 00:50:01.094627 waagent[1603]: 2025-08-13T00:50:01.094550Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1597'
Aug 13 00:50:01.218689 waagent[1603]: 2025-08-13T00:50:01.218535Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 13 00:50:01.218689 waagent[1603]: Executing ['ip', '-a', '-o', 'link']:
Aug 13 00:50:01.218689 waagent[1603]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 13 00:50:01.218689 waagent[1603]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:64:a8 brd ff:ff:ff:ff:ff:ff
Aug 13 00:50:01.218689 waagent[1603]: 3: enP55815s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:64:a8 brd ff:ff:ff:ff:ff:ff\ altname enP55815p0s2
Aug 13 00:50:01.218689 waagent[1603]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 13 00:50:01.218689 waagent[1603]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 13 00:50:01.218689 waagent[1603]: 2: eth0 inet 10.200.4.36/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 13 00:50:01.218689 waagent[1603]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 13 00:50:01.218689 waagent[1603]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Aug 13 00:50:01.218689 waagent[1603]: 2: eth0 inet6 fe80::7eed:8dff:fe6c:64a8/64 scope link \ valid_lft forever preferred_lft forever
Aug 13 00:50:01.479856 waagent[1603]: 2025-08-13T00:50:01.479777Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting
Aug 13 00:50:02.357287 waagent[1533]: 2025-08-13T00:50:02.357122Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Aug 13 00:50:02.362528 waagent[1533]: 2025-08-13T00:50:02.362467Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent
Aug 13 00:50:03.545322 waagent[1631]: 2025-08-13T00:50:03.545211Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1)
Aug 13 00:50:03.546064 waagent[1631]: 2025-08-13T00:50:03.545978Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8
Aug 13 00:50:03.546238 waagent[1631]: 2025-08-13T00:50:03.546186Z INFO ExtHandler ExtHandler Python: 3.9.16
Aug 13 00:50:03.546403 waagent[1631]: 2025-08-13T00:50:03.546354Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Aug 13 00:50:03.562014 waagent[1631]: 2025-08-13T00:50:03.561900Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
Aug 13 00:50:03.562445 waagent[1631]: 2025-08-13T00:50:03.562383Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:50:03.562630 waagent[1631]: 2025-08-13T00:50:03.562578Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:50:03.562883 waagent[1631]: 2025-08-13T00:50:03.562830Z INFO ExtHandler ExtHandler Initializing the goal state...
Aug 13 00:50:03.575103 waagent[1631]: 2025-08-13T00:50:03.575033Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Aug 13 00:50:03.583778 waagent[1631]: 2025-08-13T00:50:03.583715Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Aug 13 00:50:03.584733 waagent[1631]: 2025-08-13T00:50:03.584673Z INFO ExtHandler
Aug 13 00:50:03.584902 waagent[1631]: 2025-08-13T00:50:03.584850Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 04db5f49-204c-479f-9cac-a6d7f6dbe3d5 eTag: 3065632420829098480 source: Fabric]
Aug 13 00:50:03.585644 waagent[1631]: 2025-08-13T00:50:03.585585Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Aug 13 00:50:03.586776 waagent[1631]: 2025-08-13T00:50:03.586713Z INFO ExtHandler
Aug 13 00:50:03.586930 waagent[1631]: 2025-08-13T00:50:03.586878Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Aug 13 00:50:03.593313 waagent[1631]: 2025-08-13T00:50:03.593259Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Aug 13 00:50:03.593789 waagent[1631]: 2025-08-13T00:50:03.593736Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Aug 13 00:50:03.612818 waagent[1631]: 2025-08-13T00:50:03.612751Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Aug 13 00:50:03.668404 waagent[1631]: 2025-08-13T00:50:03.668272Z INFO ExtHandler Downloaded certificate {'thumbprint': '6AE62A6087A0A50520E299654C6087CFA79AC367', 'hasPrivateKey': True}
Aug 13 00:50:03.669688 waagent[1631]: 2025-08-13T00:50:03.669619Z INFO ExtHandler Fetch goal state from WireServer completed
Aug 13 00:50:03.670562 waagent[1631]: 2025-08-13T00:50:03.670498Z INFO ExtHandler ExtHandler Goal state initialization completed.
Aug 13 00:50:03.688681 waagent[1631]: 2025-08-13T00:50:03.688579Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Aug 13 00:50:03.696829 waagent[1631]: 2025-08-13T00:50:03.696733Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Aug 13 00:50:03.700358 waagent[1631]: 2025-08-13T00:50:03.700267Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
Aug 13 00:50:03.700584 waagent[1631]: 2025-08-13T00:50:03.700530Z INFO ExtHandler ExtHandler Checking state of the firewall
Aug 13 00:50:03.853395 waagent[1631]: 2025-08-13T00:50:03.853271Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric:
Aug 13 00:50:03.853395 waagent[1631]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:50:03.853395 waagent[1631]: pkts bytes target prot opt in out source destination
Aug 13 00:50:03.853395 waagent[1631]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:50:03.853395 waagent[1631]: pkts bytes target prot opt in out source destination
Aug 13 00:50:03.853395 waagent[1631]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:50:03.853395 waagent[1631]: pkts bytes target prot opt in out source destination
Aug 13 00:50:03.853395 waagent[1631]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 00:50:03.853395 waagent[1631]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 00:50:03.853395 waagent[1631]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 00:50:03.854569 waagent[1631]: 2025-08-13T00:50:03.854500Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
Aug 13 00:50:03.857293 waagent[1631]: 2025-08-13T00:50:03.857194Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
Aug 13 00:50:03.857559 waagent[1631]: 2025-08-13T00:50:03.857504Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Aug 13 00:50:03.857966 waagent[1631]: 2025-08-13T00:50:03.857906Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Aug 13 00:50:03.866301 waagent[1631]: 2025-08-13T00:50:03.866242Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Aug 13 00:50:03.866799 waagent[1631]: 2025-08-13T00:50:03.866742Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Aug 13 00:50:03.874226 waagent[1631]: 2025-08-13T00:50:03.874157Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1631
Aug 13 00:50:03.877309 waagent[1631]: 2025-08-13T00:50:03.877243Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Aug 13 00:50:03.878095 waagent[1631]: 2025-08-13T00:50:03.878036Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
Aug 13 00:50:03.878947 waagent[1631]: 2025-08-13T00:50:03.878887Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Aug 13 00:50:03.881507 waagent[1631]: 2025-08-13T00:50:03.881443Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem
Aug 13 00:50:03.881841 waagent[1631]: 2025-08-13T00:50:03.881786Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Aug 13 00:50:03.883154 waagent[1631]: 2025-08-13T00:50:03.883094Z INFO ExtHandler ExtHandler Starting env monitor service.
Aug 13 00:50:03.883585 waagent[1631]: 2025-08-13T00:50:03.883527Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:50:03.883759 waagent[1631]: 2025-08-13T00:50:03.883708Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:50:03.884298 waagent[1631]: 2025-08-13T00:50:03.884243Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Aug 13 00:50:03.884631 waagent[1631]: 2025-08-13T00:50:03.884574Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Aug 13 00:50:03.884631 waagent[1631]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Aug 13 00:50:03.884631 waagent[1631]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Aug 13 00:50:03.884631 waagent[1631]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Aug 13 00:50:03.884631 waagent[1631]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:50:03.884631 waagent[1631]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:50:03.884631 waagent[1631]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:50:03.887311 waagent[1631]: 2025-08-13T00:50:03.887208Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Aug 13 00:50:03.888597 waagent[1631]: 2025-08-13T00:50:03.888542Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 00:50:03.888965 waagent[1631]: 2025-08-13T00:50:03.888911Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Aug 13 00:50:03.889202 waagent[1631]: 2025-08-13T00:50:03.889149Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Aug 13 00:50:03.889463 waagent[1631]: 2025-08-13T00:50:03.889396Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 00:50:03.892805 waagent[1631]: 2025-08-13T00:50:03.892695Z INFO EnvHandler ExtHandler Configure routes
Aug 13 00:50:03.893317 waagent[1631]: 2025-08-13T00:50:03.893258Z INFO EnvHandler ExtHandler Gateway:None
Aug 13 00:50:03.893487 waagent[1631]: 2025-08-13T00:50:03.893438Z INFO EnvHandler ExtHandler Routes:None
Aug 13 00:50:03.898218 waagent[1631]: 2025-08-13T00:50:03.897974Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Aug 13 00:50:03.900432 waagent[1631]: 2025-08-13T00:50:03.900153Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Aug 13 00:50:03.903454 waagent[1631]: 2025-08-13T00:50:03.903381Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Aug 13 00:50:03.920309 waagent[1631]: 2025-08-13T00:50:03.920244Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 13 00:50:03.920309 waagent[1631]: Executing ['ip', '-a', '-o', 'link']:
Aug 13 00:50:03.920309 waagent[1631]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 13 00:50:03.920309 waagent[1631]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:64:a8 brd ff:ff:ff:ff:ff:ff
Aug 13 00:50:03.920309 waagent[1631]: 3: enP55815s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:64:a8 brd ff:ff:ff:ff:ff:ff\ altname enP55815p0s2
Aug 13 00:50:03.920309 waagent[1631]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 13 00:50:03.920309 waagent[1631]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 13 00:50:03.920309 waagent[1631]: 2: eth0 inet 10.200.4.36/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 13 00:50:03.920309 waagent[1631]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 13 00:50:03.920309 waagent[1631]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Aug 13 00:50:03.920309 waagent[1631]: 2: eth0 inet6 fe80::7eed:8dff:fe6c:64a8/64 scope link \ valid_lft forever preferred_lft forever
Aug 13 00:50:03.920786 waagent[1631]: 2025-08-13T00:50:03.920729Z INFO ExtHandler ExtHandler Downloading agent manifest
Aug 13 00:50:03.921258 waagent[1631]: 2025-08-13T00:50:03.921198Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Aug 13 00:50:03.945281 waagent[1631]: 2025-08-13T00:50:03.945211Z INFO ExtHandler ExtHandler
Aug 13 00:50:03.946313 waagent[1631]: 2025-08-13T00:50:03.946255Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 23a3d0eb-70e6-46f3-974c-3ae751acccad correlation 5c643d33-4604-4bdb-8b96-b80e97f7764c created: 2025-08-13T00:48:06.699318Z]
Aug 13 00:50:03.949642 waagent[1631]: 2025-08-13T00:50:03.949585Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Aug 13 00:50:03.953401 waagent[1631]: 2025-08-13T00:50:03.953342Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 8 ms]
Aug 13 00:50:03.976533 waagent[1631]: 2025-08-13T00:50:03.976459Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Aug 13 00:50:03.983029 waagent[1631]: 2025-08-13T00:50:03.982955Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Aug 13 00:50:03.986055 waagent[1631]: 2025-08-13T00:50:03.985977Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9496994F-FB22-465D-8466-A31C76E54075;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Aug 13 00:50:07.806820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:50:07.807188 systemd[1]: Stopped kubelet.service.
Aug 13 00:50:07.809297 systemd[1]: Starting kubelet.service...
Aug 13 00:50:07.905168 systemd[1]: Started kubelet.service.
Aug 13 00:50:08.651441 kubelet[1676]: E0813 00:50:08.651388 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:50:08.653143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:50:08.653310 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:50:18.806863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:50:18.807219 systemd[1]: Stopped kubelet.service.
Aug 13 00:50:18.809365 systemd[1]: Starting kubelet.service...
Aug 13 00:50:18.905332 systemd[1]: Started kubelet.service.
Aug 13 00:50:19.529371 kubelet[1685]: E0813 00:50:19.529316 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:50:19.531477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:50:19.531686 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:50:19.874986 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Aug 13 00:50:23.645455 systemd[1]: Created slice system-sshd.slice.
Aug 13 00:50:23.647403 systemd[1]: Started sshd@0-10.200.4.36:22-10.200.16.10:60704.service.
Aug 13 00:50:24.497217 sshd[1692]: Accepted publickey for core from 10.200.16.10 port 60704 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:50:24.498903 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:50:24.504141 systemd-logind[1420]: New session 3 of user core.
Aug 13 00:50:24.504633 systemd[1]: Started session-3.scope.
Aug 13 00:50:25.008837 systemd[1]: Started sshd@1-10.200.4.36:22-10.200.16.10:60720.service.
Aug 13 00:50:25.600025 sshd[1697]: Accepted publickey for core from 10.200.16.10 port 60720 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:50:25.601682 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:50:25.606848 systemd[1]: Started session-4.scope.
Aug 13 00:50:25.607460 systemd-logind[1420]: New session 4 of user core.
Aug 13 00:50:26.019922 sshd[1697]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:26.023501 systemd[1]: sshd@1-10.200.4.36:22-10.200.16.10:60720.service: Deactivated successfully.
Aug 13 00:50:26.024394 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:50:26.025029 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:50:26.025762 systemd-logind[1420]: Removed session 4.
Aug 13 00:50:26.121407 systemd[1]: Started sshd@2-10.200.4.36:22-10.200.16.10:60722.service.
Aug 13 00:50:26.711354 sshd[1703]: Accepted publickey for core from 10.200.16.10 port 60722 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:50:26.713052 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:50:26.718399 systemd[1]: Started session-5.scope.
Aug 13 00:50:26.718835 systemd-logind[1420]: New session 5 of user core.
Aug 13 00:50:27.124567 sshd[1703]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:27.127539 systemd[1]: sshd@2-10.200.4.36:22-10.200.16.10:60722.service: Deactivated successfully.
Aug 13 00:50:27.128383 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:50:27.129044 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:50:27.129773 systemd-logind[1420]: Removed session 5.
Aug 13 00:50:27.222913 systemd[1]: Started sshd@3-10.200.4.36:22-10.200.16.10:60736.service.
Aug 13 00:50:27.811984 sshd[1709]: Accepted publickey for core from 10.200.16.10 port 60736 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:50:27.813704 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:50:27.819488 systemd[1]: Started session-6.scope.
Aug 13 00:50:27.820256 systemd-logind[1420]: New session 6 of user core.
Aug 13 00:50:28.231028 sshd[1709]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:28.234350 systemd[1]: sshd@3-10.200.4.36:22-10.200.16.10:60736.service: Deactivated successfully.
Aug 13 00:50:28.235352 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:50:28.236124 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:50:28.237045 systemd-logind[1420]: Removed session 6.
Aug 13 00:50:28.329521 systemd[1]: Started sshd@4-10.200.4.36:22-10.200.16.10:60752.service.
Aug 13 00:50:28.917869 sshd[1715]: Accepted publickey for core from 10.200.16.10 port 60752 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0
Aug 13 00:50:28.919598 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:50:28.925239 systemd[1]: Started session-7.scope.
Aug 13 00:50:28.925837 systemd-logind[1420]: New session 7 of user core.
Aug 13 00:50:29.556746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 13 00:50:29.557120 systemd[1]: Stopped kubelet.service.
Aug 13 00:50:29.559075 systemd[1]: Starting kubelet.service...
Aug 13 00:50:30.235929 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:50:30.236364 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:50:30.310612 systemd[1]: Starting docker.service...
Aug 13 00:50:30.365445 env[1730]: time="2025-08-13T00:50:30.365389913Z" level=info msg="Starting up"
Aug 13 00:50:30.368431 env[1730]: time="2025-08-13T00:50:30.368399433Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:50:30.368431 env[1730]: time="2025-08-13T00:50:30.368425834Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:50:30.368597 env[1730]: time="2025-08-13T00:50:30.368447434Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:50:30.368597 env[1730]: time="2025-08-13T00:50:30.368460034Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:50:30.370611 env[1730]: time="2025-08-13T00:50:30.370588248Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:50:30.370721 env[1730]: time="2025-08-13T00:50:30.370705949Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:50:30.370798 env[1730]: time="2025-08-13T00:50:30.370783450Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:50:30.370855 env[1730]: time="2025-08-13T00:50:30.370844250Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:50:30.373759 systemd[1]: Started kubelet.service.
Aug 13 00:50:30.384093 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2696450902-merged.mount: Deactivated successfully.
Aug 13 00:50:30.451837 kubelet[1740]: E0813 00:50:30.451802 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:50:30.453367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:50:30.453483 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:50:30.546598 env[1730]: time="2025-08-13T00:50:30.546502646Z" level=info msg="Loading containers: start."
Aug 13 00:50:30.769022 kernel: Initializing XFRM netlink socket
Aug 13 00:50:30.808148 env[1730]: time="2025-08-13T00:50:30.808110528Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 00:50:30.938097 update_engine[1421]: I0813 00:50:30.938047 1421 update_attempter.cc:509] Updating boot flags...
Aug 13 00:50:31.007587 systemd-networkd[1597]: docker0: Link UP
Aug 13 00:50:31.043893 env[1730]: time="2025-08-13T00:50:31.043313512Z" level=info msg="Loading containers: done."
Aug 13 00:50:31.064750 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck16087564-merged.mount: Deactivated successfully.
Aug 13 00:50:31.093020 env[1730]: time="2025-08-13T00:50:31.092944229Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:50:31.093820 env[1730]: time="2025-08-13T00:50:31.093446832Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 00:50:31.093820 env[1730]: time="2025-08-13T00:50:31.093574633Z" level=info msg="Daemon has completed initialization"
Aug 13 00:50:31.134836 systemd[1]: Started docker.service.
Aug 13 00:50:31.160101 env[1730]: time="2025-08-13T00:50:31.160039857Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:50:35.253663 env[1434]: time="2025-08-13T00:50:35.253615642Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 00:50:36.058352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144794148.mount: Deactivated successfully.
Aug 13 00:50:37.789569 env[1434]: time="2025-08-13T00:50:37.789507069Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:37.794683 env[1434]: time="2025-08-13T00:50:37.794640092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:37.797920 env[1434]: time="2025-08-13T00:50:37.797885206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:37.802498 env[1434]: time="2025-08-13T00:50:37.802459826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:37.803146 env[1434]: time="2025-08-13T00:50:37.803114528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 00:50:37.804012 env[1434]: time="2025-08-13T00:50:37.803967832Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 00:50:39.717405 env[1434]: time="2025-08-13T00:50:39.717336880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:39.724351 env[1434]: time="2025-08-13T00:50:39.724284006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:39.735385 env[1434]: time="2025-08-13T00:50:39.735349449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:39.740235 env[1434]: time="2025-08-13T00:50:39.740197367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:39.740862 env[1434]: time="2025-08-13T00:50:39.740830270Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 00:50:39.741687 env[1434]: time="2025-08-13T00:50:39.741659973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 00:50:40.556800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Aug 13 00:50:40.557087 systemd[1]: Stopped kubelet.service.
Aug 13 00:50:40.559257 systemd[1]: Starting kubelet.service...
Aug 13 00:50:40.659186 systemd[1]: Started kubelet.service.
Aug 13 00:50:40.695780 kubelet[1950]: E0813 00:50:40.695730 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:50:40.697289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:50:40.697453 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:50:42.074309 env[1434]: time="2025-08-13T00:50:42.074248908Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:42.080316 env[1434]: time="2025-08-13T00:50:42.080274622Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:42.084242 env[1434]: time="2025-08-13T00:50:42.084209226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:42.087751 env[1434]: time="2025-08-13T00:50:42.087721309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:42.088398 env[1434]: time="2025-08-13T00:50:42.088367543Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 00:50:42.089016 env[1434]: time="2025-08-13T00:50:42.088974274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 00:50:43.295361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764365034.mount: Deactivated successfully.
Aug 13 00:50:43.920398 env[1434]: time="2025-08-13T00:50:43.920342166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:43.930838 env[1434]: time="2025-08-13T00:50:43.930796295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:43.936072 env[1434]: time="2025-08-13T00:50:43.936041060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:43.939064 env[1434]: time="2025-08-13T00:50:43.939034812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:43.939414 env[1434]: time="2025-08-13T00:50:43.939384130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 00:50:43.940244 env[1434]: time="2025-08-13T00:50:43.940216672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:50:44.572153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1456271085.mount: Deactivated successfully.
Aug 13 00:50:45.868140 env[1434]: time="2025-08-13T00:50:45.868080639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:45.874471 env[1434]: time="2025-08-13T00:50:45.874421742Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:45.880080 env[1434]: time="2025-08-13T00:50:45.880045411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:45.883887 env[1434]: time="2025-08-13T00:50:45.883854193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:45.884569 env[1434]: time="2025-08-13T00:50:45.884536826Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:50:45.885218 env[1434]: time="2025-08-13T00:50:45.885188557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:50:46.474323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753639750.mount: Deactivated successfully.
Aug 13 00:50:46.493534 env[1434]: time="2025-08-13T00:50:46.493486820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:46.500588 env[1434]: time="2025-08-13T00:50:46.500516747Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:46.503899 env[1434]: time="2025-08-13T00:50:46.503845902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:46.507163 env[1434]: time="2025-08-13T00:50:46.507126954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:46.507670 env[1434]: time="2025-08-13T00:50:46.507638378Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:50:46.508657 env[1434]: time="2025-08-13T00:50:46.508628624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:50:47.167642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509927149.mount: Deactivated successfully. 
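Each `PullImage \"…\" returns image reference \"sha256:…\"` entry above pins a tag (e.g. `registry.k8s.io/pause:3.10`) to its resolved image ID. Pulling those pairs out of the journal gives a quick tag-to-ID table for the control-plane images; the parser below is my own sketch (the escaped `\"` quoting it matches is exactly how containerd's entries appear in this log):

```python
import re

# Matches containerd's journal form: PullImage \"<ref>\" returns image reference \"sha256:<hex>\"
# (quotes inside msg="..." are backslash-escaped in the raw log text).
PULL_RE = re.compile(
    r'PullImage \\"([^\\"]+)\\" returns image reference \\"(sha256:[0-9a-f]+)\\"'
)

def image_ids(lines):
    """Map each pulled image reference to the image ID containerd resolved it to."""
    out = {}
    for line in lines:
        for ref, image_id in PULL_RE.findall(line):
            out[ref] = image_id
    return out
```

Run over the entries above, this would recover the full v1.31.11 control-plane set (apiserver, controller-manager, scheduler, proxy) plus coredns, pause, and etcd.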
Aug 13 00:50:49.787048 env[1434]: time="2025-08-13T00:50:49.786980602Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:49.793041 env[1434]: time="2025-08-13T00:50:49.792997860Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:49.796972 env[1434]: time="2025-08-13T00:50:49.796945729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:49.800916 env[1434]: time="2025-08-13T00:50:49.800883698Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:50:49.801625 env[1434]: time="2025-08-13T00:50:49.801593528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:50:50.806798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Aug 13 00:50:50.807063 systemd[1]: Stopped kubelet.service. Aug 13 00:50:50.808982 systemd[1]: Starting kubelet.service... Aug 13 00:50:50.947981 systemd[1]: Started kubelet.service. 
Aug 13 00:50:51.577970 kubelet[1977]: E0813 00:50:51.577918 1977 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:50:51.579789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:50:51.579950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:50:52.580043 systemd[1]: Stopped kubelet.service. Aug 13 00:50:52.583234 systemd[1]: Starting kubelet.service... Aug 13 00:50:52.623754 systemd[1]: Reloading. Aug 13 00:50:52.730494 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-08-13T00:50:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:50:52.730535 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-08-13T00:50:52Z" level=info msg="torcx already run" Aug 13 00:50:52.830160 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:50:52.830186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:50:52.848546 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:50:52.947170 systemd[1]: Started kubelet.service. Aug 13 00:50:52.950415 systemd[1]: Stopping kubelet.service... 
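The `kubelet.service: Scheduled restart job, restart counter is at N` lines (counters 5 and 6 above) show systemd's `Restart=` loop cycling while `/var/lib/kubelet/config.yaml` does not yet exist — the expected state on a node where `kubeadm init`/`join` has not yet written the kubelet config. A small log-triage sketch for spotting this pattern (the regexes target the exact journal wording above; the helper name is my own):

```python
import re

# systemd's restart-counter line, as emitted in this journal.
RESTART_RE = re.compile(
    r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)"
)
# kubelet's missing-config error; captures the config path it tried to load.
CONFIG_ERR_RE = re.compile(r"failed to load kubelet config file, path: (\S+?),")

def triage(lines):
    """Return (restart counters seen, config paths kubelet failed to load)."""
    counters, paths = [], []
    for line in lines:
        if m := RESTART_RE.search(line):
            counters.append(int(m.group(1)))
        if m := CONFIG_ERR_RE.search(line):
            paths.append(m.group(1))
    return counters, paths
```

A monotonically climbing counter paired with the same missing path on every attempt (as here) points at bootstrap ordering, not a crashing kubelet.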
Aug 13 00:50:52.950889 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:50:52.951107 systemd[1]: Stopped kubelet.service. Aug 13 00:50:52.952848 systemd[1]: Starting kubelet.service... Aug 13 00:50:53.335884 systemd[1]: Started kubelet.service. Aug 13 00:50:54.115074 kubelet[2079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:50:54.115074 kubelet[2079]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:50:54.115074 kubelet[2079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:50:54.115570 kubelet[2079]: I0813 00:50:54.115177 2079 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:50:54.671442 kubelet[2079]: I0813 00:50:54.671393 2079 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:50:54.671442 kubelet[2079]: I0813 00:50:54.671425 2079 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:50:54.671753 kubelet[2079]: I0813 00:50:54.671730 2079 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:50:54.698439 kubelet[2079]: E0813 00:50:54.698400 2079 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:50:54.702429 kubelet[2079]: I0813 00:50:54.702391 2079 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:50:54.709132 kubelet[2079]: E0813 00:50:54.709096 2079 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:50:54.709132 kubelet[2079]: I0813 00:50:54.709123 2079 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:50:54.713951 kubelet[2079]: I0813 00:50:54.713926 2079 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:50:54.714111 kubelet[2079]: I0813 00:50:54.714092 2079 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:50:54.714412 kubelet[2079]: I0813 00:50:54.714383 2079 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:50:54.714741 kubelet[2079]: I0813 00:50:54.714414 2079 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-09b422438d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:50:54.714910 kubelet[2079]: I0813 00:50:54.714758 2079 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:50:54.714910 kubelet[2079]: I0813 00:50:54.714772 2079 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:50:54.714910 kubelet[2079]: I0813 00:50:54.714904 2079 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:50:54.718245 kubelet[2079]: I0813 00:50:54.718220 2079 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:50:54.718331 kubelet[2079]: I0813 00:50:54.718257 2079 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:50:54.718331 kubelet[2079]: I0813 00:50:54.718306 2079 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:50:54.718331 kubelet[2079]: I0813 00:50:54.718327 2079 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:50:54.726306 kubelet[2079]: I0813 00:50:54.726287 2079 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:50:54.726912 kubelet[2079]: I0813 00:50:54.726883 2079 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:50:54.732396 kubelet[2079]: W0813 00:50:54.732364 2079 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:50:54.737571 kubelet[2079]: I0813 00:50:54.737549 2079 server.go:1274] "Started kubelet" Aug 13 00:50:54.747416 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
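The repeated `dial tcp 10.200.4.36:6443: connect: connection refused` failures from the client-go reflectors below are the usual control-plane bootstrap chicken-and-egg: the kubelet starts before the static-pod kube-apiserver it is about to create is listening, so every list/watch fails fast with ECONNREFUSED rather than hanging. A minimal probe that makes that distinction (a generic sketch, not kubelet code; host and port are whatever endpoint you pass in):

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Classify a TCP endpoint: 'open' (something accepted), 'refused'
    (RST came back, nothing listening), or 'timeout' (filtered/unreachable)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"  # the state the reflectors report against 10.200.4.36:6443
    except OSError:       # includes socket.timeout and host-unreachable errors
        return "timeout"
```

"Refused" here is actually the healthy bootstrap signal: the node is reachable and the errors stop on their own once the apiserver static pod binds 6443.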
Aug 13 00:50:54.748418 kubelet[2079]: I0813 00:50:54.747548 2079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:50:54.753373 kubelet[2079]: I0813 00:50:54.753344 2079 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:50:54.754587 kubelet[2079]: I0813 00:50:54.754569 2079 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:50:54.756232 kubelet[2079]: W0813 00:50:54.755960 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-09b422438d&limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused Aug 13 00:50:54.756232 kubelet[2079]: E0813 00:50:54.756063 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-09b422438d&limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:50:54.759237 kubelet[2079]: I0813 00:50:54.759202 2079 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:50:54.759429 kubelet[2079]: I0813 00:50:54.759393 2079 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:50:54.760347 kubelet[2079]: W0813 00:50:54.760304 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused Aug 13 00:50:54.760434 kubelet[2079]: E0813 00:50:54.760354 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.200.4.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:50:54.760974 kubelet[2079]: I0813 00:50:54.760733 2079 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:50:54.763805 kubelet[2079]: E0813 00:50:54.762979 2079 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-09b422438d\" not found" Aug 13 00:50:54.763805 kubelet[2079]: I0813 00:50:54.763042 2079 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:50:54.763805 kubelet[2079]: I0813 00:50:54.763277 2079 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:50:54.763805 kubelet[2079]: I0813 00:50:54.763340 2079 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:50:54.763805 kubelet[2079]: W0813 00:50:54.763767 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused Aug 13 00:50:54.764071 kubelet[2079]: E0813 00:50:54.763831 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:50:54.764120 kubelet[2079]: I0813 00:50:54.764078 2079 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:50:54.764188 kubelet[2079]: I0813 00:50:54.764165 2079 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": 
dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:50:54.765901 kubelet[2079]: E0813 00:50:54.765412 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-09b422438d?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="200ms" Aug 13 00:50:54.767603 kubelet[2079]: I0813 00:50:54.767584 2079 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:50:54.773099 kubelet[2079]: E0813 00:50:54.773076 2079 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:50:54.788513 kubelet[2079]: E0813 00:50:54.786809 2079 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-09b422438d.185b2d4522028a49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-09b422438d,UID:ci-3510.3.8-a-09b422438d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-09b422438d,},FirstTimestamp:2025-08-13 00:50:54.737525321 +0000 UTC m=+1.394812827,LastTimestamp:2025-08-13 00:50:54.737525321 +0000 UTC m=+1.394812827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-09b422438d,}" Aug 13 00:50:54.809818 kubelet[2079]: I0813 00:50:54.809782 2079 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:50:54.809818 kubelet[2079]: I0813 00:50:54.809810 2079 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:50:54.809975 
kubelet[2079]: I0813 00:50:54.809829 2079 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:50:54.819858 kubelet[2079]: I0813 00:50:54.819749 2079 policy_none.go:49] "None policy: Start" Aug 13 00:50:54.820598 kubelet[2079]: I0813 00:50:54.820584 2079 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:50:54.820737 kubelet[2079]: I0813 00:50:54.820726 2079 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:50:54.825482 kubelet[2079]: I0813 00:50:54.825442 2079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:50:54.827439 kubelet[2079]: I0813 00:50:54.827415 2079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:50:54.828313 kubelet[2079]: I0813 00:50:54.827453 2079 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:50:54.828313 kubelet[2079]: I0813 00:50:54.827474 2079 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:50:54.828313 kubelet[2079]: E0813 00:50:54.827535 2079 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:50:54.831522 kubelet[2079]: W0813 00:50:54.831468 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused Aug 13 00:50:54.831608 kubelet[2079]: E0813 00:50:54.831538 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:50:54.836812 systemd[1]: Created slice kubepods.slice. 
Aug 13 00:50:54.841080 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 00:50:54.843940 systemd[1]: Created slice kubepods-besteffort.slice. Aug 13 00:50:54.851591 kubelet[2079]: I0813 00:50:54.851576 2079 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:50:54.852269 kubelet[2079]: I0813 00:50:54.852251 2079 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:50:54.852398 kubelet[2079]: I0813 00:50:54.852360 2079 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:50:54.852900 kubelet[2079]: I0813 00:50:54.852885 2079 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:50:54.854711 kubelet[2079]: E0813 00:50:54.854691 2079 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-a-09b422438d\" not found" Aug 13 00:50:54.943131 systemd[1]: Created slice kubepods-burstable-podcb483c91439925d63b840591535a5273.slice. Aug 13 00:50:54.954613 kubelet[2079]: I0813 00:50:54.954591 2079 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.954931 kubelet[2079]: E0813 00:50:54.954903 2079 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.961778 systemd[1]: Created slice kubepods-burstable-pod2ea01a1d5633c7d7a3f7408b4afc52a8.slice. 
Aug 13 00:50:54.965437 kubelet[2079]: I0813 00:50:54.964826 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb483c91439925d63b840591535a5273-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-09b422438d\" (UID: \"cb483c91439925d63b840591535a5273\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965437 kubelet[2079]: I0813 00:50:54.964871 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965437 kubelet[2079]: I0813 00:50:54.964920 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965437 kubelet[2079]: I0813 00:50:54.964946 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965437 kubelet[2079]: I0813 00:50:54.965096 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/cb483c91439925d63b840591535a5273-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-09b422438d\" (UID: \"cb483c91439925d63b840591535a5273\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965315 systemd[1]: Created slice kubepods-burstable-pod31d8f0da300ec5f02389efe405622006.slice. Aug 13 00:50:54.965687 kubelet[2079]: I0813 00:50:54.965125 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb483c91439925d63b840591535a5273-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-09b422438d\" (UID: \"cb483c91439925d63b840591535a5273\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965687 kubelet[2079]: I0813 00:50:54.965169 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965687 kubelet[2079]: I0813 00:50:54.965194 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d" Aug 13 00:50:54.965687 kubelet[2079]: I0813 00:50:54.965270 2079 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31d8f0da300ec5f02389efe405622006-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-09b422438d\" (UID: \"31d8f0da300ec5f02389efe405622006\") " 
pod="kube-system/kube-scheduler-ci-3510.3.8-a-09b422438d"
Aug 13 00:50:54.966378 kubelet[2079]: E0813 00:50:54.966311 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-09b422438d?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="400ms"
Aug 13 00:50:55.157152 kubelet[2079]: I0813 00:50:55.157114 2079 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:55.157676 kubelet[2079]: E0813 00:50:55.157484 2079 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:55.261729 env[1434]: time="2025-08-13T00:50:55.261032943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-09b422438d,Uid:cb483c91439925d63b840591535a5273,Namespace:kube-system,Attempt:0,}"
Aug 13 00:50:55.269115 env[1434]: time="2025-08-13T00:50:55.269073936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-09b422438d,Uid:2ea01a1d5633c7d7a3f7408b4afc52a8,Namespace:kube-system,Attempt:0,}"
Aug 13 00:50:55.269452 env[1434]: time="2025-08-13T00:50:55.269421648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-09b422438d,Uid:31d8f0da300ec5f02389efe405622006,Namespace:kube-system,Attempt:0,}"
Aug 13 00:50:55.367653 kubelet[2079]: E0813 00:50:55.367600 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-09b422438d?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="800ms"
Aug 13 00:50:55.560091 kubelet[2079]: I0813 00:50:55.560058 2079 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:55.560504 kubelet[2079]: E0813 00:50:55.560471 2079 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:55.709098 kubelet[2079]: W0813 00:50:55.709034 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-09b422438d&limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused
Aug 13 00:50:55.709265 kubelet[2079]: E0813 00:50:55.709108 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-09b422438d&limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:50:56.169019 kubelet[2079]: E0813 00:50:56.168948 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-09b422438d?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="1.6s"
Aug 13 00:50:56.173524 kubelet[2079]: W0813 00:50:56.173459 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused
Aug 13 00:50:56.173661 kubelet[2079]: E0813 00:50:56.173539 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:50:56.197044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298632073.mount: Deactivated successfully.
Aug 13 00:50:56.226897 kubelet[2079]: W0813 00:50:56.226836 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused
Aug 13 00:50:56.227071 kubelet[2079]: E0813 00:50:56.226906 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:50:56.270051 kubelet[2079]: W0813 00:50:56.269971 2079 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.36:6443: connect: connection refused
Aug 13 00:50:56.270189 kubelet[2079]: E0813 00:50:56.270065 2079 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:50:56.326552 env[1434]: time="2025-08-13T00:50:56.326498938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.335047 env[1434]: time="2025-08-13T00:50:56.335008739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.341998 env[1434]: time="2025-08-13T00:50:56.341953586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.348702 env[1434]: time="2025-08-13T00:50:56.348670724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.362077 kubelet[2079]: I0813 00:50:56.362056 2079 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:56.362407 kubelet[2079]: E0813 00:50:56.362377 2079 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:56.479513 env[1434]: time="2025-08-13T00:50:56.479357856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.488055 env[1434]: time="2025-08-13T00:50:56.488012363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.496559 env[1434]: time="2025-08-13T00:50:56.496518365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.499739 env[1434]: time="2025-08-13T00:50:56.499707278Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.502818 env[1434]: time="2025-08-13T00:50:56.502789587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.511438 env[1434]: time="2025-08-13T00:50:56.511396692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.733214 env[1434]: time="2025-08-13T00:50:56.732178918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:56.760790 kubelet[2079]: E0813 00:50:56.760746 2079 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.36:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:50:57.058699 kubelet[2079]: E0813 00:50:57.058596 2079 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-09b422438d.185b2d4522028a49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-09b422438d,UID:ci-3510.3.8-a-09b422438d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-09b422438d,},FirstTimestamp:2025-08-13 00:50:54.737525321 +0000 UTC m=+1.394812827,LastTimestamp:2025-08-13 00:50:54.737525321 +0000 UTC m=+1.394812827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-09b422438d,}"
Aug 13 00:50:57.641733 env[1434]: time="2025-08-13T00:50:57.641676775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:50:57.707624 env[1434]: time="2025-08-13T00:50:57.688916705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:50:57.707624 env[1434]: time="2025-08-13T00:50:57.688973107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:50:57.707624 env[1434]: time="2025-08-13T00:50:57.689016809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:50:57.707624 env[1434]: time="2025-08-13T00:50:57.689157514Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d9e8d70a961b1a5bd9c848cb3a0195e0a8d8d8bc02a935084ce06f7a8ba5a5f pid=2120 runtime=io.containerd.runc.v2
Aug 13 00:50:57.712570 systemd[1]: Started cri-containerd-5d9e8d70a961b1a5bd9c848cb3a0195e0a8d8d8bc02a935084ce06f7a8ba5a5f.scope.
Aug 13 00:50:57.728479 env[1434]: time="2025-08-13T00:50:57.728413969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:50:57.729927 env[1434]: time="2025-08-13T00:50:57.729069691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:50:57.729927 env[1434]: time="2025-08-13T00:50:57.729093992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:50:57.736891 env[1434]: time="2025-08-13T00:50:57.736829659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a7249df274da1d71a8a41951643bc34e4992f765112ca2206f49255e647ba31 pid=2154 runtime=io.containerd.runc.v2
Aug 13 00:50:57.754072 env[1434]: time="2025-08-13T00:50:57.751360961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:50:57.754072 env[1434]: time="2025-08-13T00:50:57.751444964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:50:57.754072 env[1434]: time="2025-08-13T00:50:57.751471565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:50:57.754072 env[1434]: time="2025-08-13T00:50:57.751607870Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/befba7c865f58abe74ff3a0840825b13a37a6b5aa2e0886bd8a74486d7922b67 pid=2182 runtime=io.containerd.runc.v2
Aug 13 00:50:57.760144 systemd[1]: Started cri-containerd-2a7249df274da1d71a8a41951643bc34e4992f765112ca2206f49255e647ba31.scope.
Aug 13 00:50:57.769758 kubelet[2079]: E0813 00:50:57.769684 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-09b422438d?timeout=10s\": dial tcp 10.200.4.36:6443: connect: connection refused" interval="3.2s"
Aug 13 00:50:57.784784 systemd[1]: Started cri-containerd-befba7c865f58abe74ff3a0840825b13a37a6b5aa2e0886bd8a74486d7922b67.scope.
Aug 13 00:50:57.838361 env[1434]: time="2025-08-13T00:50:57.838318963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-09b422438d,Uid:2ea01a1d5633c7d7a3f7408b4afc52a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d9e8d70a961b1a5bd9c848cb3a0195e0a8d8d8bc02a935084ce06f7a8ba5a5f\""
Aug 13 00:50:57.843303 env[1434]: time="2025-08-13T00:50:57.843263733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-09b422438d,Uid:cb483c91439925d63b840591535a5273,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a7249df274da1d71a8a41951643bc34e4992f765112ca2206f49255e647ba31\""
Aug 13 00:50:57.845062 env[1434]: time="2025-08-13T00:50:57.845026794Z" level=info msg="CreateContainer within sandbox \"5d9e8d70a961b1a5bd9c848cb3a0195e0a8d8d8bc02a935084ce06f7a8ba5a5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 00:50:57.846859 env[1434]: time="2025-08-13T00:50:57.846829257Z" level=info msg="CreateContainer within sandbox \"2a7249df274da1d71a8a41951643bc34e4992f765112ca2206f49255e647ba31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 00:50:57.864289 env[1434]: time="2025-08-13T00:50:57.864236057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-09b422438d,Uid:31d8f0da300ec5f02389efe405622006,Namespace:kube-system,Attempt:0,} returns sandbox id \"befba7c865f58abe74ff3a0840825b13a37a6b5aa2e0886bd8a74486d7922b67\""
Aug 13 00:50:57.867137 env[1434]: time="2025-08-13T00:50:57.867091256Z" level=info msg="CreateContainer within sandbox \"befba7c865f58abe74ff3a0840825b13a37a6b5aa2e0886bd8a74486d7922b67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 00:50:57.895165 env[1434]: time="2025-08-13T00:50:57.894330696Z" level=info msg="CreateContainer within sandbox \"5d9e8d70a961b1a5bd9c848cb3a0195e0a8d8d8bc02a935084ce06f7a8ba5a5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b0cc23bf78d13d516fdd7e023cfe344690669daf8e2ccf696ff1b55b33326a3f\""
Aug 13 00:50:57.895296 env[1434]: time="2025-08-13T00:50:57.895261028Z" level=info msg="StartContainer for \"b0cc23bf78d13d516fdd7e023cfe344690669daf8e2ccf696ff1b55b33326a3f\""
Aug 13 00:50:57.905468 env[1434]: time="2025-08-13T00:50:57.905425479Z" level=info msg="CreateContainer within sandbox \"2a7249df274da1d71a8a41951643bc34e4992f765112ca2206f49255e647ba31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f321043fbfd57152f1b31adcf63262e1662a70d3da413ad0f187ed1a8af9ab4a\""
Aug 13 00:50:57.905894 env[1434]: time="2025-08-13T00:50:57.905859894Z" level=info msg="StartContainer for \"f321043fbfd57152f1b31adcf63262e1662a70d3da413ad0f187ed1a8af9ab4a\""
Aug 13 00:50:57.912115 systemd[1]: Started cri-containerd-b0cc23bf78d13d516fdd7e023cfe344690669daf8e2ccf696ff1b55b33326a3f.scope.
Aug 13 00:50:57.936027 systemd[1]: Started cri-containerd-f321043fbfd57152f1b31adcf63262e1662a70d3da413ad0f187ed1a8af9ab4a.scope.
Aug 13 00:50:57.941308 env[1434]: time="2025-08-13T00:50:57.941256116Z" level=info msg="CreateContainer within sandbox \"befba7c865f58abe74ff3a0840825b13a37a6b5aa2e0886bd8a74486d7922b67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd5fe92aaff0a97c74d72dbe181e353cdfd23de0b007ebf225b7c66ec3aea26f\""
Aug 13 00:50:57.941882 env[1434]: time="2025-08-13T00:50:57.941847137Z" level=info msg="StartContainer for \"dd5fe92aaff0a97c74d72dbe181e353cdfd23de0b007ebf225b7c66ec3aea26f\""
Aug 13 00:50:57.965879 kubelet[2079]: I0813 00:50:57.965676 2079 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:57.967459 kubelet[2079]: E0813 00:50:57.966129 2079 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.36:6443/api/v1/nodes\": dial tcp 10.200.4.36:6443: connect: connection refused" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:50:57.978284 systemd[1]: Started cri-containerd-dd5fe92aaff0a97c74d72dbe181e353cdfd23de0b007ebf225b7c66ec3aea26f.scope.
Aug 13 00:50:58.018202 env[1434]: time="2025-08-13T00:50:58.018151155Z" level=info msg="StartContainer for \"b0cc23bf78d13d516fdd7e023cfe344690669daf8e2ccf696ff1b55b33326a3f\" returns successfully"
Aug 13 00:50:58.031714 env[1434]: time="2025-08-13T00:50:58.031663710Z" level=info msg="StartContainer for \"f321043fbfd57152f1b31adcf63262e1662a70d3da413ad0f187ed1a8af9ab4a\" returns successfully"
Aug 13 00:50:58.131596 env[1434]: time="2025-08-13T00:50:58.131550068Z" level=info msg="StartContainer for \"dd5fe92aaff0a97c74d72dbe181e353cdfd23de0b007ebf225b7c66ec3aea26f\" returns successfully"
Aug 13 00:51:00.756070 kubelet[2079]: I0813 00:51:00.755955 2079 apiserver.go:52] "Watching apiserver"
Aug 13 00:51:00.757271 kubelet[2079]: E0813 00:51:00.757237 2079 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.8-a-09b422438d" not found
Aug 13 00:51:00.763898 kubelet[2079]: I0813 00:51:00.763874 2079 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 00:51:00.973593 kubelet[2079]: E0813 00:51:00.973559 2079 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-a-09b422438d\" not found" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:51:01.100207 kubelet[2079]: E0813 00:51:01.100168 2079 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.8-a-09b422438d" not found
Aug 13 00:51:01.168067 kubelet[2079]: I0813 00:51:01.168030 2079 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:51:01.178462 kubelet[2079]: I0813 00:51:01.178435 2079 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:51:03.069854 systemd[1]: Reloading.
Aug 13 00:51:03.159200 /usr/lib/systemd/system-generators/torcx-generator[2371]: time="2025-08-13T00:51:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:51:03.159240 /usr/lib/systemd/system-generators/torcx-generator[2371]: time="2025-08-13T00:51:03Z" level=info msg="torcx already run"
Aug 13 00:51:03.256336 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:51:03.256357 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:51:03.272766 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:51:03.393498 systemd[1]: Stopping kubelet.service...
Aug 13 00:51:03.411373 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 00:51:03.411579 systemd[1]: Stopped kubelet.service.
Aug 13 00:51:03.411635 systemd[1]: kubelet.service: Consumed 1.053s CPU time.
Aug 13 00:51:03.413593 systemd[1]: Starting kubelet.service...
Aug 13 00:51:04.334380 systemd[1]: Started kubelet.service.
Aug 13 00:51:04.388594 kubelet[2438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:51:04.388927 kubelet[2438]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:51:04.388970 kubelet[2438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:51:04.389124 kubelet[2438]: I0813 00:51:04.389093 2438 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:51:04.397948 kubelet[2438]: I0813 00:51:04.397910 2438 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 00:51:04.397948 kubelet[2438]: I0813 00:51:04.397935 2438 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:51:04.398224 kubelet[2438]: I0813 00:51:04.398202 2438 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 00:51:04.399742 kubelet[2438]: I0813 00:51:04.399714 2438 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 00:51:04.402410 kubelet[2438]: I0813 00:51:04.402199 2438 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:51:04.416021 kubelet[2438]: E0813 00:51:04.415348 2438 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:51:04.416021 kubelet[2438]: I0813 00:51:04.415384 2438 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:51:04.419478 kubelet[2438]: I0813 00:51:04.419443 2438 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:51:04.419627 kubelet[2438]: I0813 00:51:04.419609 2438 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 00:51:04.419832 kubelet[2438]: I0813 00:51:04.419793 2438 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:51:04.420061 kubelet[2438]: I0813 00:51:04.419834 2438 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-09b422438d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:51:04.420233 kubelet[2438]: I0813 00:51:04.420076 2438 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:51:04.420233 kubelet[2438]: I0813 00:51:04.420091 2438 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 00:51:04.420233 kubelet[2438]: I0813 00:51:04.420125 2438 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:51:04.420378 kubelet[2438]: I0813 00:51:04.420255 2438 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 00:51:04.420378 kubelet[2438]: I0813 00:51:04.420271 2438 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:51:04.421227 kubelet[2438]: I0813 00:51:04.421207 2438 kubelet.go:314] "Adding apiserver pod source"
Aug 13 00:51:04.434225 kubelet[2438]: I0813 00:51:04.434165 2438 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:51:04.439247 kubelet[2438]: I0813 00:51:04.439225 2438 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:51:04.440090 kubelet[2438]: I0813 00:51:04.440073 2438 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:51:04.440963 kubelet[2438]: I0813 00:51:04.440948 2438 server.go:1274] "Started kubelet"
Aug 13 00:51:04.447338 kubelet[2438]: I0813 00:51:04.443351 2438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:51:04.456610 kubelet[2438]: E0813 00:51:04.450316 2438 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.447201 2438 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.451438 2438 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.447250 2438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.453007 2438 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.454107 2438 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.444591 2438 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.454356 2438 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 00:51:04.456610 kubelet[2438]: I0813 00:51:04.454467 2438 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:51:04.460221 kubelet[2438]: I0813 00:51:04.459601 2438 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:51:04.462590 kubelet[2438]: I0813 00:51:04.462564 2438 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:51:04.462590 kubelet[2438]: I0813 00:51:04.462583 2438 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:51:04.470580 kubelet[2438]: I0813 00:51:04.470557 2438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:51:04.471567 kubelet[2438]: I0813 00:51:04.471551 2438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:51:04.471664 kubelet[2438]: I0813 00:51:04.471656 2438 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 00:51:04.471722 kubelet[2438]: I0813 00:51:04.471715 2438 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 00:51:04.471798 kubelet[2438]: E0813 00:51:04.471786 2438 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:51:04.512163 kubelet[2438]: I0813 00:51:04.512140 2438 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 00:51:04.512383 kubelet[2438]: I0813 00:51:04.512372 2438 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 00:51:04.512473 kubelet[2438]: I0813 00:51:04.512466 2438 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:51:04.512687 kubelet[2438]: I0813 00:51:04.512663 2438 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 00:51:04.512785 kubelet[2438]: I0813 00:51:04.512754 2438 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 00:51:04.512831 kubelet[2438]: I0813 00:51:04.512826 2438 policy_none.go:49] "None policy: Start"
Aug 13 00:51:04.513647 kubelet[2438]: I0813 00:51:04.513626 2438 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 00:51:04.513647 kubelet[2438]: I0813 00:51:04.513651 2438 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:51:04.513829 kubelet[2438]: I0813 00:51:04.513811 2438 state_mem.go:75] "Updated machine memory state"
Aug 13 00:51:04.519785 kubelet[2438]: I0813 00:51:04.519756 2438 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:51:04.519941 kubelet[2438]: I0813 00:51:04.519923 2438 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:51:04.520034 kubelet[2438]: I0813 00:51:04.519939 2438 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:51:04.522500 kubelet[2438]: I0813 00:51:04.522472 2438 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:51:04.583968 kubelet[2438]: W0813 00:51:04.583920 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 00:51:04.589984 kubelet[2438]: W0813 00:51:04.588410 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 00:51:04.590184 kubelet[2438]: W0813 00:51:04.588490 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 00:51:04.627200 kubelet[2438]: I0813 00:51:04.627155 2438 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.645444 kubelet[2438]: I0813 00:51:04.645415 2438 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.645671 kubelet[2438]: I0813 00:51:04.645655 2438 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.655600 kubelet[2438]: I0813 00:51:04.655575 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.655812 kubelet[2438]: I0813 00:51:04.655792 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31d8f0da300ec5f02389efe405622006-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-09b422438d\" (UID: \"31d8f0da300ec5f02389efe405622006\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.656284 kubelet[2438]: I0813 00:51:04.656265 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb483c91439925d63b840591535a5273-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-09b422438d\" (UID: \"cb483c91439925d63b840591535a5273\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.656421 kubelet[2438]: I0813 00:51:04.656406 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb483c91439925d63b840591535a5273-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-09b422438d\" (UID: \"cb483c91439925d63b840591535a5273\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.656959 kubelet[2438]: I0813 00:51:04.656938 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.657126 kubelet[2438]: I0813 00:51:04.657108 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.657238 kubelet[2438]: I0813 00:51:04.657222 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.657359 kubelet[2438]: I0813 00:51:04.657335 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb483c91439925d63b840591535a5273-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-09b422438d\" (UID: \"cb483c91439925d63b840591535a5273\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:04.657471 kubelet[2438]: I0813 00:51:04.657454 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea01a1d5633c7d7a3f7408b4afc52a8-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" (UID: \"2ea01a1d5633c7d7a3f7408b4afc52a8\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:05.174788 sudo[2469]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 00:51:05.175102 sudo[2469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Aug 13 00:51:05.436139 kubelet[2438]: I0813 00:51:05.436027 2438 apiserver.go:52] "Watching apiserver"
Aug 13 00:51:05.454932 kubelet[2438]: I0813 00:51:05.454891 2438 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 00:51:05.512627 kubelet[2438]: W0813 00:51:05.512604 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 00:51:05.512899 kubelet[2438]: E0813 00:51:05.512858 2438 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-a-09b422438d\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:05.514176 kubelet[2438]: W0813 00:51:05.514159 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 00:51:05.514354 kubelet[2438]: E0813 00:51:05.514338 2438 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-a-09b422438d\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d"
Aug 13 00:51:05.546253 kubelet[2438]: I0813 00:51:05.546182 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-a-09b422438d" podStartSLOduration=1.546151406 podStartE2EDuration="1.546151406s" podCreationTimestamp="2025-08-13 00:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:05.535442505 +0000 UTC m=+1.195347836" watchObservedRunningTime="2025-08-13 00:51:05.546151406 +0000 UTC m=+1.206056637"
Aug 13 00:51:05.557003 kubelet[2438]: I0813 00:51:05.556949 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-a-09b422438d" podStartSLOduration=1.556920808 podStartE2EDuration="1.556920808s" podCreationTimestamp="2025-08-13 00:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:05.556456895 +0000 UTC m=+1.216362126" watchObservedRunningTime="2025-08-13 00:51:05.556920808 +0000 UTC m=+1.216826139"
Aug 13 00:51:05.557795 kubelet[2438]: I0813 00:51:05.557753 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-09b422438d" podStartSLOduration=1.5577435309999998 podStartE2EDuration="1.557743531s" podCreationTimestamp="2025-08-13 00:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:05.546829125 +0000 UTC m=+1.206734456" watchObservedRunningTime="2025-08-13 00:51:05.557743531 +0000 UTC m=+1.217648862"
Aug 13 00:51:05.720969 sudo[2469]: pam_unix(sudo:session): session closed for user root
Aug 13 00:51:07.442353 kubelet[2438]: I0813 00:51:07.442321 2438 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 00:51:07.443139 env[1434]: time="2025-08-13T00:51:07.443105259Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 00:51:07.443609 kubelet[2438]: I0813 00:51:07.443590 2438 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 00:51:07.512434 sudo[1718]: pam_unix(sudo:session): session closed for user root
Aug 13 00:51:07.606024 sshd[1715]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:07.609081 systemd[1]: sshd@4-10.200.4.36:22-10.200.16.10:60752.service: Deactivated successfully.
Aug 13 00:51:07.609976 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 00:51:07.610167 systemd[1]: session-7.scope: Consumed 5.430s CPU time.
Aug 13 00:51:07.610709 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit.
Aug 13 00:51:07.611583 systemd-logind[1420]: Removed session 7.
Aug 13 00:51:08.275799 systemd[1]: Created slice kubepods-besteffort-pod7c1f9dc9_6508_4f58_bef5_510ab342d756.slice.
Aug 13 00:51:08.281489 kubelet[2438]: I0813 00:51:08.281459 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c1f9dc9-6508-4f58-bef5-510ab342d756-kube-proxy\") pod \"kube-proxy-85qdd\" (UID: \"7c1f9dc9-6508-4f58-bef5-510ab342d756\") " pod="kube-system/kube-proxy-85qdd"
Aug 13 00:51:08.281748 kubelet[2438]: I0813 00:51:08.281727 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c1f9dc9-6508-4f58-bef5-510ab342d756-xtables-lock\") pod \"kube-proxy-85qdd\" (UID: \"7c1f9dc9-6508-4f58-bef5-510ab342d756\") " pod="kube-system/kube-proxy-85qdd"
Aug 13 00:51:08.281882 kubelet[2438]: I0813 00:51:08.281861 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c1f9dc9-6508-4f58-bef5-510ab342d756-lib-modules\") pod \"kube-proxy-85qdd\" (UID: \"7c1f9dc9-6508-4f58-bef5-510ab342d756\") " pod="kube-system/kube-proxy-85qdd"
Aug 13 00:51:08.282026 kubelet[2438]: I0813 00:51:08.282001 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2p6s\" (UniqueName: \"kubernetes.io/projected/7c1f9dc9-6508-4f58-bef5-510ab342d756-kube-api-access-k2p6s\") pod \"kube-proxy-85qdd\" (UID: \"7c1f9dc9-6508-4f58-bef5-510ab342d756\") " pod="kube-system/kube-proxy-85qdd"
Aug 13 00:51:08.290732 systemd[1]: Created slice kubepods-burstable-pod0aad5066_d6e7_43d3_a77d_c2a5b1d926a3.slice.
Aug 13 00:51:08.382948 kubelet[2438]: I0813 00:51:08.382920 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hubble-tls\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383179 kubelet[2438]: I0813 00:51:08.383162 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-etc-cni-netd\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383288 kubelet[2438]: I0813 00:51:08.383274 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-xtables-lock\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383396 kubelet[2438]: I0813 00:51:08.383385 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-run\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383463 kubelet[2438]: I0813 00:51:08.383454 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-cgroup\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383535 kubelet[2438]: I0813 00:51:08.383526 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cni-path\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383604 kubelet[2438]: I0813 00:51:08.383594 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-net\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383665 kubelet[2438]: I0813 00:51:08.383653 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-bpf-maps\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383723 kubelet[2438]: I0813 00:51:08.383715 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hostproc\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383785 kubelet[2438]: I0813 00:51:08.383776 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-config-path\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383854 kubelet[2438]: I0813 00:51:08.383842 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gnj\" (UniqueName: \"kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-kube-api-access-z2gnj\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.383923 kubelet[2438]: I0813 00:51:08.383903 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-kernel\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.384015 kubelet[2438]: I0813 00:51:08.383985 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-lib-modules\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.384090 kubelet[2438]: I0813 00:51:08.384077 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-clustermesh-secrets\") pod \"cilium-9vs86\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " pod="kube-system/cilium-9vs86"
Aug 13 00:51:08.388968 kubelet[2438]: I0813 00:51:08.388930 2438 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 13 00:51:08.553198 systemd[1]: Created slice kubepods-besteffort-pod10c3e8b7_cc8a_406f_bca5_8f634ceadf5d.slice.
Aug 13 00:51:08.585258 env[1434]: time="2025-08-13T00:51:08.585208276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85qdd,Uid:7c1f9dc9-6508-4f58-bef5-510ab342d756,Namespace:kube-system,Attempt:0,}"
Aug 13 00:51:08.586180 kubelet[2438]: I0813 00:51:08.586141 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4pv\" (UniqueName: \"kubernetes.io/projected/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-kube-api-access-6x4pv\") pod \"cilium-operator-5d85765b45-tzdvf\" (UID: \"10c3e8b7-cc8a-406f-bca5-8f634ceadf5d\") " pod="kube-system/cilium-operator-5d85765b45-tzdvf"
Aug 13 00:51:08.586525 kubelet[2438]: I0813 00:51:08.586197 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-cilium-config-path\") pod \"cilium-operator-5d85765b45-tzdvf\" (UID: \"10c3e8b7-cc8a-406f-bca5-8f634ceadf5d\") " pod="kube-system/cilium-operator-5d85765b45-tzdvf"
Aug 13 00:51:08.600806 env[1434]: time="2025-08-13T00:51:08.600761581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vs86,Uid:0aad5066-d6e7-43d3-a77d-c2a5b1d926a3,Namespace:kube-system,Attempt:0,}"
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.646687478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.646718378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.646732579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.646888783Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12b91b1ea2e611d0ec5f7e262ad87be8fe0c0fbacee38ac34e9ff0a288150677 pid=2517 runtime=io.containerd.runc.v2
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.654782988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.654816889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.654826490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:51:08.657600 env[1434]: time="2025-08-13T00:51:08.654974993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f pid=2537 runtime=io.containerd.runc.v2
Aug 13 00:51:08.679767 systemd[1]: Started cri-containerd-12b91b1ea2e611d0ec5f7e262ad87be8fe0c0fbacee38ac34e9ff0a288150677.scope.
Aug 13 00:51:08.686003 systemd[1]: Started cri-containerd-5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f.scope.
Aug 13 00:51:08.723379 env[1434]: time="2025-08-13T00:51:08.723323774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vs86,Uid:0aad5066-d6e7-43d3-a77d-c2a5b1d926a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\""
Aug 13 00:51:08.726712 env[1434]: time="2025-08-13T00:51:08.726669161Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:51:08.740585 env[1434]: time="2025-08-13T00:51:08.740542622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85qdd,Uid:7c1f9dc9-6508-4f58-bef5-510ab342d756,Namespace:kube-system,Attempt:0,} returns sandbox id \"12b91b1ea2e611d0ec5f7e262ad87be8fe0c0fbacee38ac34e9ff0a288150677\""
Aug 13 00:51:08.744549 env[1434]: time="2025-08-13T00:51:08.744523126Z" level=info msg="CreateContainer within sandbox \"12b91b1ea2e611d0ec5f7e262ad87be8fe0c0fbacee38ac34e9ff0a288150677\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:51:08.785245 env[1434]: time="2025-08-13T00:51:08.785192486Z" level=info msg="CreateContainer within sandbox \"12b91b1ea2e611d0ec5f7e262ad87be8fe0c0fbacee38ac34e9ff0a288150677\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e1eed9fae89dce9c0aba46e14f4c19da85780b666865f09a4f446dbba51b2fb\""
Aug 13 00:51:08.787345 env[1434]: time="2025-08-13T00:51:08.786148510Z" level=info msg="StartContainer for \"3e1eed9fae89dce9c0aba46e14f4c19da85780b666865f09a4f446dbba51b2fb\""
Aug 13 00:51:08.806807 systemd[1]: Started cri-containerd-3e1eed9fae89dce9c0aba46e14f4c19da85780b666865f09a4f446dbba51b2fb.scope.
Aug 13 00:51:08.854821 env[1434]: time="2025-08-13T00:51:08.854765798Z" level=info msg="StartContainer for \"3e1eed9fae89dce9c0aba46e14f4c19da85780b666865f09a4f446dbba51b2fb\" returns successfully"
Aug 13 00:51:08.857610 env[1434]: time="2025-08-13T00:51:08.857560871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tzdvf,Uid:10c3e8b7-cc8a-406f-bca5-8f634ceadf5d,Namespace:kube-system,Attempt:0,}"
Aug 13 00:51:08.911323 env[1434]: time="2025-08-13T00:51:08.911248869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:51:08.911509 env[1434]: time="2025-08-13T00:51:08.911331571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:51:08.911509 env[1434]: time="2025-08-13T00:51:08.911357972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:51:08.911630 env[1434]: time="2025-08-13T00:51:08.911504376Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa pid=2641 runtime=io.containerd.runc.v2
Aug 13 00:51:08.927581 systemd[1]: Started cri-containerd-c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa.scope.
Aug 13 00:51:08.978040 env[1434]: time="2025-08-13T00:51:08.977981508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tzdvf,Uid:10c3e8b7-cc8a-406f-bca5-8f634ceadf5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\""
Aug 13 00:51:13.618102 kubelet[2438]: I0813 00:51:13.618032 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-85qdd" podStartSLOduration=5.618010405 podStartE2EDuration="5.618010405s" podCreationTimestamp="2025-08-13 00:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:09.523752094 +0000 UTC m=+5.183657325" watchObservedRunningTime="2025-08-13 00:51:13.618010405 +0000 UTC m=+9.277915736"
Aug 13 00:51:14.077888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434721846.mount: Deactivated successfully.
Aug 13 00:51:16.827804 env[1434]: time="2025-08-13T00:51:16.827460261Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:16.834681 env[1434]: time="2025-08-13T00:51:16.834643215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:16.838812 env[1434]: time="2025-08-13T00:51:16.838779004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:16.839308 env[1434]: time="2025-08-13T00:51:16.839277915Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 00:51:16.842005 env[1434]: time="2025-08-13T00:51:16.841588265Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:51:16.842740 env[1434]: time="2025-08-13T00:51:16.842708989Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:51:16.870508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237177598.mount: Deactivated successfully.
Aug 13 00:51:16.878435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934104874.mount: Deactivated successfully.
Aug 13 00:51:16.888080 env[1434]: time="2025-08-13T00:51:16.888044564Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\""
Aug 13 00:51:16.888740 env[1434]: time="2025-08-13T00:51:16.888712478Z" level=info msg="StartContainer for \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\""
Aug 13 00:51:16.916636 systemd[1]: Started cri-containerd-75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d.scope.
Aug 13 00:51:16.951048 env[1434]: time="2025-08-13T00:51:16.950212500Z" level=info msg="StartContainer for \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\" returns successfully"
Aug 13 00:51:16.956467 systemd[1]: cri-containerd-75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d.scope: Deactivated successfully.
Aug 13 00:51:17.867748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d-rootfs.mount: Deactivated successfully.
Aug 13 00:51:20.701247 env[1434]: time="2025-08-13T00:51:20.701195777Z" level=info msg="shim disconnected" id=75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d
Aug 13 00:51:20.701247 env[1434]: time="2025-08-13T00:51:20.701243278Z" level=warning msg="cleaning up after shim disconnected" id=75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d namespace=k8s.io
Aug 13 00:51:20.701247 env[1434]: time="2025-08-13T00:51:20.701255078Z" level=info msg="cleaning up dead shim"
Aug 13 00:51:20.710289 env[1434]: time="2025-08-13T00:51:20.710249355Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:51:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2850 runtime=io.containerd.runc.v2\n"
Aug 13 00:51:21.553082 env[1434]: time="2025-08-13T00:51:21.553030760Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:51:21.692900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905518623.mount: Deactivated successfully.
Aug 13 00:51:21.767239 env[1434]: time="2025-08-13T00:51:21.767185371Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\""
Aug 13 00:51:21.769915 env[1434]: time="2025-08-13T00:51:21.768094788Z" level=info msg="StartContainer for \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\""
Aug 13 00:51:21.811943 systemd[1]: Started cri-containerd-6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561.scope.
Aug 13 00:51:21.854304 env[1434]: time="2025-08-13T00:51:21.854258842Z" level=info msg="StartContainer for \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\" returns successfully"
Aug 13 00:51:21.865658 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:51:21.865953 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 00:51:21.866696 systemd[1]: Stopping systemd-sysctl.service...
Aug 13 00:51:21.868913 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:51:21.877029 systemd[1]: cri-containerd-6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561.scope: Deactivated successfully.
Aug 13 00:51:21.882124 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:51:21.929305 env[1434]: time="2025-08-13T00:51:21.929249382Z" level=info msg="shim disconnected" id=6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561
Aug 13 00:51:21.929305 env[1434]: time="2025-08-13T00:51:21.929307383Z" level=warning msg="cleaning up after shim disconnected" id=6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561 namespace=k8s.io
Aug 13 00:51:21.929580 env[1434]: time="2025-08-13T00:51:21.929318483Z" level=info msg="cleaning up dead shim"
Aug 13 00:51:21.957951 env[1434]: time="2025-08-13T00:51:21.957904232Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:51:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2916 runtime=io.containerd.runc.v2\n"
Aug 13 00:51:22.485885 env[1434]: time="2025-08-13T00:51:22.485833163Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:22.491395 env[1434]: time="2025-08-13T00:51:22.491348367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:22.498078 env[1434]: time="2025-08-13T00:51:22.498041792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:22.498486 env[1434]: time="2025-08-13T00:51:22.498453000Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:51:22.501735 env[1434]: time="2025-08-13T00:51:22.501246753Z" level=info msg="CreateContainer within sandbox \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:51:22.527080 env[1434]: time="2025-08-13T00:51:22.527036137Z" level=info msg="CreateContainer within sandbox \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\""
Aug 13 00:51:22.529348 env[1434]: time="2025-08-13T00:51:22.527697749Z" level=info msg="StartContainer for \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\""
Aug 13 00:51:22.547075 systemd[1]: Started cri-containerd-d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d.scope.
Aug 13 00:51:22.556689 env[1434]: time="2025-08-13T00:51:22.556650793Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:51:22.616133 env[1434]: time="2025-08-13T00:51:22.616080509Z" level=info msg="StartContainer for \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\" returns successfully"
Aug 13 00:51:22.616398 env[1434]: time="2025-08-13T00:51:22.616354614Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\""
Aug 13 00:51:22.618193 env[1434]: time="2025-08-13T00:51:22.618161748Z" level=info msg="StartContainer for \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\""
Aug 13 00:51:22.644461 systemd[1]: Started cri-containerd-da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7.scope.
Aug 13 00:51:22.687412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561-rootfs.mount: Deactivated successfully.
Aug 13 00:51:22.703587 env[1434]: time="2025-08-13T00:51:22.703538951Z" level=info msg="StartContainer for \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\" returns successfully"
Aug 13 00:51:22.709484 systemd[1]: cri-containerd-da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7.scope: Deactivated successfully.
Aug 13 00:51:22.739342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7-rootfs.mount: Deactivated successfully.
Aug 13 00:51:23.185570 env[1434]: time="2025-08-13T00:51:23.185513626Z" level=info msg="shim disconnected" id=da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7
Aug 13 00:51:23.186085 env[1434]: time="2025-08-13T00:51:23.185577328Z" level=warning msg="cleaning up after shim disconnected" id=da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7 namespace=k8s.io
Aug 13 00:51:23.186085 env[1434]: time="2025-08-13T00:51:23.185589428Z" level=info msg="cleaning up dead shim"
Aug 13 00:51:23.201660 env[1434]: time="2025-08-13T00:51:23.201609922Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:51:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3014 runtime=io.containerd.runc.v2\n"
Aug 13 00:51:23.554717 env[1434]: time="2025-08-13T00:51:23.554594707Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:51:23.584486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279704729.mount: Deactivated successfully.
Aug 13 00:51:23.600030 env[1434]: time="2025-08-13T00:51:23.599972041Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\""
Aug 13 00:51:23.600568 env[1434]: time="2025-08-13T00:51:23.600530451Z" level=info msg="StartContainer for \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\""
Aug 13 00:51:23.615456 kubelet[2438]: I0813 00:51:23.615389 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-tzdvf" podStartSLOduration=2.095531756 podStartE2EDuration="15.615374224s" podCreationTimestamp="2025-08-13 00:51:08 +0000 UTC" firstStartedPulling="2025-08-13 00:51:08.979670452 +0000 UTC m=+4.639575683" lastFinishedPulling="2025-08-13 00:51:22.49951292 +0000 UTC m=+18.159418151" observedRunningTime="2025-08-13 00:51:23.570304196 +0000 UTC m=+19.230209427" watchObservedRunningTime="2025-08-13 00:51:23.615374224 +0000 UTC m=+19.275279555"
Aug 13 00:51:23.633157 systemd[1]: Started cri-containerd-15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3.scope.
Aug 13 00:51:23.659433 systemd[1]: cri-containerd-15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3.scope: Deactivated successfully.
Aug 13 00:51:23.663181 env[1434]: time="2025-08-13T00:51:23.663140801Z" level=info msg="StartContainer for \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\" returns successfully"
Aug 13 00:51:23.685207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3-rootfs.mount: Deactivated successfully.
Aug 13 00:51:23.691212 env[1434]: time="2025-08-13T00:51:23.691164216Z" level=info msg="shim disconnected" id=15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3
Aug 13 00:51:23.691362 env[1434]: time="2025-08-13T00:51:23.691212217Z" level=warning msg="cleaning up after shim disconnected" id=15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3 namespace=k8s.io
Aug 13 00:51:23.691362 env[1434]: time="2025-08-13T00:51:23.691224017Z" level=info msg="cleaning up dead shim"
Aug 13 00:51:23.699652 env[1434]: time="2025-08-13T00:51:23.699616971Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:51:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3068 runtime=io.containerd.runc.v2\n"
Aug 13 00:51:24.564021 env[1434]: time="2025-08-13T00:51:24.561405484Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:51:24.610218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634713427.mount: Deactivated successfully.
Aug 13 00:51:24.623354 env[1434]: time="2025-08-13T00:51:24.623312997Z" level=info msg="CreateContainer within sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\""
Aug 13 00:51:24.625271 env[1434]: time="2025-08-13T00:51:24.625244231Z" level=info msg="StartContainer for \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\""
Aug 13 00:51:24.646894 systemd[1]: Started cri-containerd-bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33.scope.
Aug 13 00:51:24.690828 env[1434]: time="2025-08-13T00:51:24.690778310Z" level=info msg="StartContainer for \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\" returns successfully" Aug 13 00:51:24.730026 systemd[1]: run-containerd-runc-k8s.io-bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33-runc.dSbLN1.mount: Deactivated successfully. Aug 13 00:51:24.839411 kubelet[2438]: I0813 00:51:24.838143 2438 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:51:24.878620 systemd[1]: Created slice kubepods-burstable-poda4eea6c1_9231_4db6_be7b_1a184a0a6309.slice. Aug 13 00:51:24.895733 systemd[1]: Created slice kubepods-burstable-poda031696d_4496_4a25_99be_9c5af74d0bff.slice. Aug 13 00:51:24.998469 kubelet[2438]: I0813 00:51:24.998426 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s96h\" (UniqueName: \"kubernetes.io/projected/a4eea6c1-9231-4db6-be7b-1a184a0a6309-kube-api-access-8s96h\") pod \"coredns-7c65d6cfc9-w6v6c\" (UID: \"a4eea6c1-9231-4db6-be7b-1a184a0a6309\") " pod="kube-system/coredns-7c65d6cfc9-w6v6c" Aug 13 00:51:24.998742 kubelet[2438]: I0813 00:51:24.998723 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76nk\" (UniqueName: \"kubernetes.io/projected/a031696d-4496-4a25-99be-9c5af74d0bff-kube-api-access-t76nk\") pod \"coredns-7c65d6cfc9-8rjlq\" (UID: \"a031696d-4496-4a25-99be-9c5af74d0bff\") " pod="kube-system/coredns-7c65d6cfc9-8rjlq" Aug 13 00:51:24.998869 kubelet[2438]: I0813 00:51:24.998853 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a031696d-4496-4a25-99be-9c5af74d0bff-config-volume\") pod \"coredns-7c65d6cfc9-8rjlq\" (UID: \"a031696d-4496-4a25-99be-9c5af74d0bff\") " pod="kube-system/coredns-7c65d6cfc9-8rjlq" Aug 13 00:51:24.999030 
kubelet[2438]: I0813 00:51:24.998973 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4eea6c1-9231-4db6-be7b-1a184a0a6309-config-volume\") pod \"coredns-7c65d6cfc9-w6v6c\" (UID: \"a4eea6c1-9231-4db6-be7b-1a184a0a6309\") " pod="kube-system/coredns-7c65d6cfc9-w6v6c" Aug 13 00:51:25.185454 env[1434]: time="2025-08-13T00:51:25.185349931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w6v6c,Uid:a4eea6c1-9231-4db6-be7b-1a184a0a6309,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:25.205080 env[1434]: time="2025-08-13T00:51:25.205039477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8rjlq,Uid:a031696d-4496-4a25-99be-9c5af74d0bff,Namespace:kube-system,Attempt:0,}" Aug 13 00:51:27.354113 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 00:51:27.354251 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:51:27.360523 systemd-networkd[1597]: cilium_host: Link UP Aug 13 00:51:27.360750 systemd-networkd[1597]: cilium_net: Link UP Aug 13 00:51:27.360964 systemd-networkd[1597]: cilium_net: Gained carrier Aug 13 00:51:27.361285 systemd-networkd[1597]: cilium_host: Gained carrier Aug 13 00:51:27.365150 systemd-networkd[1597]: cilium_net: Gained IPv6LL Aug 13 00:51:27.589416 systemd-networkd[1597]: cilium_vxlan: Link UP Aug 13 00:51:27.589430 systemd-networkd[1597]: cilium_vxlan: Gained carrier Aug 13 00:51:27.902084 kernel: NET: Registered PF_ALG protocol family Aug 13 00:51:28.053221 systemd-networkd[1597]: cilium_host: Gained IPv6LL Aug 13 00:51:28.837263 systemd-networkd[1597]: lxc_health: Link UP Aug 13 00:51:28.851808 systemd-networkd[1597]: lxc_health: Gained carrier Aug 13 00:51:28.852014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:51:29.258937 systemd-networkd[1597]: lxcdd7fb8895447: Link UP Aug 13 00:51:29.264019 
kernel: eth0: renamed from tmp2c186 Aug 13 00:51:29.272740 systemd-networkd[1597]: lxcdd7fb8895447: Gained carrier Aug 13 00:51:29.273032 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdd7fb8895447: link becomes ready Aug 13 00:51:29.289383 systemd-networkd[1597]: lxcf9c39ec93331: Link UP Aug 13 00:51:29.305296 kernel: eth0: renamed from tmp9fdf4 Aug 13 00:51:29.316017 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf9c39ec93331: link becomes ready Aug 13 00:51:29.315875 systemd-networkd[1597]: lxcf9c39ec93331: Gained carrier Aug 13 00:51:29.397234 systemd-networkd[1597]: cilium_vxlan: Gained IPv6LL Aug 13 00:51:30.165154 systemd-networkd[1597]: lxc_health: Gained IPv6LL Aug 13 00:51:30.639021 kubelet[2438]: I0813 00:51:30.638930 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9vs86" podStartSLOduration=14.523315491 podStartE2EDuration="22.638907318s" podCreationTimestamp="2025-08-13 00:51:08 +0000 UTC" firstStartedPulling="2025-08-13 00:51:08.724880314 +0000 UTC m=+4.384785545" lastFinishedPulling="2025-08-13 00:51:16.840472141 +0000 UTC m=+12.500377372" observedRunningTime="2025-08-13 00:51:25.585278568 +0000 UTC m=+21.245183799" watchObservedRunningTime="2025-08-13 00:51:30.638907318 +0000 UTC m=+26.298812549" Aug 13 00:51:30.741129 systemd-networkd[1597]: lxcf9c39ec93331: Gained IPv6LL Aug 13 00:51:30.805149 systemd-networkd[1597]: lxcdd7fb8895447: Gained IPv6LL Aug 13 00:51:32.899973 env[1434]: time="2025-08-13T00:51:32.899910011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:32.901891 env[1434]: time="2025-08-13T00:51:32.901829340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:32.902091 env[1434]: time="2025-08-13T00:51:32.902060944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:32.902831 env[1434]: time="2025-08-13T00:51:32.902795355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c1863045b8cc1239f4fdd5b4f7862ca26a137de95bc70dec548c2fd2e3d4300 pid=3618 runtime=io.containerd.runc.v2 Aug 13 00:51:32.921349 env[1434]: time="2025-08-13T00:51:32.921159435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:51:32.921349 env[1434]: time="2025-08-13T00:51:32.921202936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:51:32.921349 env[1434]: time="2025-08-13T00:51:32.921219236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:51:32.923171 env[1434]: time="2025-08-13T00:51:32.923112465Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fdf40cafcbd481523c433a3d011fc5704f2eec2534d54e5af239826f33f12f3 pid=3630 runtime=io.containerd.runc.v2 Aug 13 00:51:32.930891 systemd[1]: Started cri-containerd-2c1863045b8cc1239f4fdd5b4f7862ca26a137de95bc70dec548c2fd2e3d4300.scope. Aug 13 00:51:32.970685 systemd[1]: Started cri-containerd-9fdf40cafcbd481523c433a3d011fc5704f2eec2534d54e5af239826f33f12f3.scope. 
Aug 13 00:51:33.089535 env[1434]: time="2025-08-13T00:51:33.089485375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w6v6c,Uid:a4eea6c1-9231-4db6-be7b-1a184a0a6309,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c1863045b8cc1239f4fdd5b4f7862ca26a137de95bc70dec548c2fd2e3d4300\"" Aug 13 00:51:33.095330 env[1434]: time="2025-08-13T00:51:33.095289961Z" level=info msg="CreateContainer within sandbox \"2c1863045b8cc1239f4fdd5b4f7862ca26a137de95bc70dec548c2fd2e3d4300\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:51:33.097679 env[1434]: time="2025-08-13T00:51:33.097636297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8rjlq,Uid:a031696d-4496-4a25-99be-9c5af74d0bff,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fdf40cafcbd481523c433a3d011fc5704f2eec2534d54e5af239826f33f12f3\"" Aug 13 00:51:33.100561 env[1434]: time="2025-08-13T00:51:33.100528540Z" level=info msg="CreateContainer within sandbox \"9fdf40cafcbd481523c433a3d011fc5704f2eec2534d54e5af239826f33f12f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:51:33.135565 env[1434]: time="2025-08-13T00:51:33.135531663Z" level=info msg="CreateContainer within sandbox \"2c1863045b8cc1239f4fdd5b4f7862ca26a137de95bc70dec548c2fd2e3d4300\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8e09fb618c30649a0b1002e1ba9d4ce54d39f87f0e1e1e687603fd971b5e8eb\"" Aug 13 00:51:33.136301 env[1434]: time="2025-08-13T00:51:33.136271874Z" level=info msg="StartContainer for \"b8e09fb618c30649a0b1002e1ba9d4ce54d39f87f0e1e1e687603fd971b5e8eb\"" Aug 13 00:51:33.151301 env[1434]: time="2025-08-13T00:51:33.150671189Z" level=info msg="CreateContainer within sandbox \"9fdf40cafcbd481523c433a3d011fc5704f2eec2534d54e5af239826f33f12f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8be1126e7302bfbc0b55f9cceba669be0423a8285aefb4662c41510b8a9284b4\"" Aug 13 00:51:33.155858 systemd[1]: 
Started cri-containerd-b8e09fb618c30649a0b1002e1ba9d4ce54d39f87f0e1e1e687603fd971b5e8eb.scope. Aug 13 00:51:33.158376 env[1434]: time="2025-08-13T00:51:33.158345404Z" level=info msg="StartContainer for \"8be1126e7302bfbc0b55f9cceba669be0423a8285aefb4662c41510b8a9284b4\"" Aug 13 00:51:33.182984 systemd[1]: Started cri-containerd-8be1126e7302bfbc0b55f9cceba669be0423a8285aefb4662c41510b8a9284b4.scope. Aug 13 00:51:33.213903 env[1434]: time="2025-08-13T00:51:33.213853534Z" level=info msg="StartContainer for \"b8e09fb618c30649a0b1002e1ba9d4ce54d39f87f0e1e1e687603fd971b5e8eb\" returns successfully" Aug 13 00:51:33.236217 env[1434]: time="2025-08-13T00:51:33.236158367Z" level=info msg="StartContainer for \"8be1126e7302bfbc0b55f9cceba669be0423a8285aefb4662c41510b8a9284b4\" returns successfully" Aug 13 00:51:33.614546 kubelet[2438]: I0813 00:51:33.614490 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w6v6c" podStartSLOduration=25.614474922 podStartE2EDuration="25.614474922s" podCreationTimestamp="2025-08-13 00:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:33.612934999 +0000 UTC m=+29.272840230" watchObservedRunningTime="2025-08-13 00:51:33.614474922 +0000 UTC m=+29.274380153" Aug 13 00:51:33.615067 kubelet[2438]: I0813 00:51:33.614584 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8rjlq" podStartSLOduration=25.614578223 podStartE2EDuration="25.614578223s" podCreationTimestamp="2025-08-13 00:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:51:33.596604555 +0000 UTC m=+29.256509886" watchObservedRunningTime="2025-08-13 00:51:33.614578223 +0000 UTC m=+29.274483554" Aug 13 00:51:33.909083 systemd[1]: 
run-containerd-runc-k8s.io-9fdf40cafcbd481523c433a3d011fc5704f2eec2534d54e5af239826f33f12f3-runc.dzojjx.mount: Deactivated successfully. Aug 13 00:53:41.030714 systemd[1]: Started sshd@5-10.200.4.36:22-10.200.16.10:47752.service. Aug 13 00:53:41.619513 sshd[3792]: Accepted publickey for core from 10.200.16.10 port 47752 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:41.621183 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:41.627807 systemd[1]: Started session-8.scope. Aug 13 00:53:41.629641 systemd-logind[1420]: New session 8 of user core. Aug 13 00:53:42.169776 sshd[3792]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:42.173207 systemd[1]: sshd@5-10.200.4.36:22-10.200.16.10:47752.service: Deactivated successfully. Aug 13 00:53:42.174366 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:53:42.175327 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:53:42.176386 systemd-logind[1420]: Removed session 8. Aug 13 00:53:47.270530 systemd[1]: Started sshd@6-10.200.4.36:22-10.200.16.10:47762.service. Aug 13 00:53:47.862608 sshd[3804]: Accepted publickey for core from 10.200.16.10 port 47762 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:47.864231 sshd[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:47.869777 systemd[1]: Started session-9.scope. Aug 13 00:53:47.870273 systemd-logind[1420]: New session 9 of user core. Aug 13 00:53:48.343969 sshd[3804]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:48.347419 systemd[1]: sshd@6-10.200.4.36:22-10.200.16.10:47762.service: Deactivated successfully. Aug 13 00:53:48.348400 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:53:48.349092 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:53:48.349880 systemd-logind[1420]: Removed session 9. 
Aug 13 00:53:53.444303 systemd[1]: Started sshd@7-10.200.4.36:22-10.200.16.10:56228.service. Aug 13 00:53:54.037543 sshd[3817]: Accepted publickey for core from 10.200.16.10 port 56228 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:53:54.039042 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:54.043985 systemd[1]: Started session-10.scope. Aug 13 00:53:54.044481 systemd-logind[1420]: New session 10 of user core. Aug 13 00:53:54.518360 sshd[3817]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:54.521939 systemd[1]: sshd@7-10.200.4.36:22-10.200.16.10:56228.service: Deactivated successfully. Aug 13 00:53:54.523025 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:53:54.523873 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:53:54.524861 systemd-logind[1420]: Removed session 10. Aug 13 00:53:59.617278 systemd[1]: Started sshd@8-10.200.4.36:22-10.200.16.10:56232.service. Aug 13 00:54:00.210498 sshd[3831]: Accepted publickey for core from 10.200.16.10 port 56232 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:00.212376 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:00.217243 systemd-logind[1420]: New session 11 of user core. Aug 13 00:54:00.217934 systemd[1]: Started session-11.scope. Aug 13 00:54:00.692407 sshd[3831]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:00.695269 systemd[1]: sshd@8-10.200.4.36:22-10.200.16.10:56232.service: Deactivated successfully. Aug 13 00:54:00.696116 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:54:00.697087 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:54:00.697892 systemd-logind[1420]: Removed session 11. Aug 13 00:54:05.792832 systemd[1]: Started sshd@9-10.200.4.36:22-10.200.16.10:35918.service. 
Aug 13 00:54:06.386789 sshd[3846]: Accepted publickey for core from 10.200.16.10 port 35918 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:06.388396 sshd[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:06.392563 systemd-logind[1420]: New session 12 of user core. Aug 13 00:54:06.393626 systemd[1]: Started session-12.scope. Aug 13 00:54:06.869638 sshd[3846]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:06.873157 systemd[1]: sshd@9-10.200.4.36:22-10.200.16.10:35918.service: Deactivated successfully. Aug 13 00:54:06.874312 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:54:06.875267 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:54:06.876335 systemd-logind[1420]: Removed session 12. Aug 13 00:54:06.969417 systemd[1]: Started sshd@10-10.200.4.36:22-10.200.16.10:35920.service. Aug 13 00:54:07.562165 sshd[3858]: Accepted publickey for core from 10.200.16.10 port 35920 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:07.563965 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:07.569754 systemd-logind[1420]: New session 13 of user core. Aug 13 00:54:07.570063 systemd[1]: Started session-13.scope. Aug 13 00:54:08.077881 sshd[3858]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:08.080920 systemd[1]: sshd@10-10.200.4.36:22-10.200.16.10:35920.service: Deactivated successfully. Aug 13 00:54:08.082232 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:54:08.082275 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:54:08.083320 systemd-logind[1420]: Removed session 13. Aug 13 00:54:08.176722 systemd[1]: Started sshd@11-10.200.4.36:22-10.200.16.10:35934.service. 
Aug 13 00:54:08.763476 sshd[3868]: Accepted publickey for core from 10.200.16.10 port 35934 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:08.765220 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:08.770334 systemd-logind[1420]: New session 14 of user core. Aug 13 00:54:08.770868 systemd[1]: Started session-14.scope. Aug 13 00:54:09.245922 sshd[3868]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:09.250870 systemd[1]: sshd@11-10.200.4.36:22-10.200.16.10:35934.service: Deactivated successfully. Aug 13 00:54:09.252128 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:54:09.253141 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:54:09.254287 systemd-logind[1420]: Removed session 14. Aug 13 00:54:14.346238 systemd[1]: Started sshd@12-10.200.4.36:22-10.200.16.10:54598.service. Aug 13 00:54:14.938900 sshd[3881]: Accepted publickey for core from 10.200.16.10 port 54598 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:14.940787 sshd[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:14.946773 systemd[1]: Started session-15.scope. Aug 13 00:54:14.947351 systemd-logind[1420]: New session 15 of user core. Aug 13 00:54:15.428439 sshd[3881]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:15.432691 systemd[1]: sshd@12-10.200.4.36:22-10.200.16.10:54598.service: Deactivated successfully. Aug 13 00:54:15.433939 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:54:15.435074 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:54:15.436314 systemd-logind[1420]: Removed session 15. Aug 13 00:54:15.528066 systemd[1]: Started sshd@13-10.200.4.36:22-10.200.16.10:54604.service. 
Aug 13 00:54:16.117451 sshd[3893]: Accepted publickey for core from 10.200.16.10 port 54604 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:16.119755 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:16.125273 systemd[1]: Started session-16.scope. Aug 13 00:54:16.125721 systemd-logind[1420]: New session 16 of user core. Aug 13 00:54:16.626725 sshd[3893]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:16.630301 systemd[1]: sshd@13-10.200.4.36:22-10.200.16.10:54604.service: Deactivated successfully. Aug 13 00:54:16.631403 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:54:16.632240 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:54:16.633032 systemd-logind[1420]: Removed session 16. Aug 13 00:54:16.726244 systemd[1]: Started sshd@14-10.200.4.36:22-10.200.16.10:54608.service. Aug 13 00:54:17.318205 sshd[3903]: Accepted publickey for core from 10.200.16.10 port 54608 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:17.319893 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:17.326078 systemd-logind[1420]: New session 17 of user core. Aug 13 00:54:17.326218 systemd[1]: Started session-17.scope. Aug 13 00:54:18.962497 sshd[3903]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:18.966050 systemd[1]: sshd@14-10.200.4.36:22-10.200.16.10:54608.service: Deactivated successfully. Aug 13 00:54:18.967474 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:54:18.967535 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:54:18.969199 systemd-logind[1420]: Removed session 17. Aug 13 00:54:19.061252 systemd[1]: Started sshd@15-10.200.4.36:22-10.200.16.10:54614.service. 
Aug 13 00:54:19.652633 sshd[3920]: Accepted publickey for core from 10.200.16.10 port 54614 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:19.654053 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:19.659379 systemd[1]: Started session-18.scope. Aug 13 00:54:19.659868 systemd-logind[1420]: New session 18 of user core. Aug 13 00:54:20.241345 sshd[3920]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:20.245415 systemd[1]: sshd@15-10.200.4.36:22-10.200.16.10:54614.service: Deactivated successfully. Aug 13 00:54:20.246471 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:54:20.247378 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:54:20.248416 systemd-logind[1420]: Removed session 18. Aug 13 00:54:20.340299 systemd[1]: Started sshd@16-10.200.4.36:22-10.200.16.10:49156.service. Aug 13 00:54:20.929496 sshd[3929]: Accepted publickey for core from 10.200.16.10 port 49156 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:20.931687 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:20.937114 systemd[1]: Started session-19.scope. Aug 13 00:54:20.937614 systemd-logind[1420]: New session 19 of user core. Aug 13 00:54:21.419320 sshd[3929]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:21.422644 systemd[1]: sshd@16-10.200.4.36:22-10.200.16.10:49156.service: Deactivated successfully. Aug 13 00:54:21.423759 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:54:21.424617 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:54:21.425627 systemd-logind[1420]: Removed session 19. Aug 13 00:54:26.520117 systemd[1]: Started sshd@17-10.200.4.36:22-10.200.16.10:49160.service. 
Aug 13 00:54:27.110025 sshd[3944]: Accepted publickey for core from 10.200.16.10 port 49160 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:27.111725 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:27.117547 systemd[1]: Started session-20.scope. Aug 13 00:54:27.118160 systemd-logind[1420]: New session 20 of user core. Aug 13 00:54:27.605750 sshd[3944]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:27.609209 systemd[1]: sshd@17-10.200.4.36:22-10.200.16.10:49160.service: Deactivated successfully. Aug 13 00:54:27.610352 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:54:27.611248 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:54:27.612423 systemd-logind[1420]: Removed session 20. Aug 13 00:54:32.707089 systemd[1]: Started sshd@18-10.200.4.36:22-10.200.16.10:35878.service. Aug 13 00:54:33.303501 sshd[3957]: Accepted publickey for core from 10.200.16.10 port 35878 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:33.305296 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:33.312186 systemd[1]: Started session-21.scope. Aug 13 00:54:33.314206 systemd-logind[1420]: New session 21 of user core. Aug 13 00:54:33.783536 sshd[3957]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:33.787331 systemd[1]: sshd@18-10.200.4.36:22-10.200.16.10:35878.service: Deactivated successfully. Aug 13 00:54:33.788374 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:54:33.789498 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:54:33.790591 systemd-logind[1420]: Removed session 21. Aug 13 00:54:38.882366 systemd[1]: Started sshd@19-10.200.4.36:22-10.200.16.10:35888.service. 
Aug 13 00:54:39.470574 sshd[3972]: Accepted publickey for core from 10.200.16.10 port 35888 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:39.472387 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:39.478194 systemd[1]: Started session-22.scope. Aug 13 00:54:39.478567 systemd-logind[1420]: New session 22 of user core. Aug 13 00:54:39.948403 sshd[3972]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:39.951878 systemd[1]: sshd@19-10.200.4.36:22-10.200.16.10:35888.service: Deactivated successfully. Aug 13 00:54:39.953039 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:54:39.953862 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:54:39.954864 systemd-logind[1420]: Removed session 22. Aug 13 00:54:40.056036 systemd[1]: Started sshd@20-10.200.4.36:22-10.200.16.10:35896.service. Aug 13 00:54:40.645108 sshd[3986]: Accepted publickey for core from 10.200.16.10 port 35896 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:40.646567 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:40.651563 systemd[1]: Started session-23.scope. Aug 13 00:54:40.652197 systemd-logind[1420]: New session 23 of user core. 
Aug 13 00:54:42.277175 env[1434]: time="2025-08-13T00:54:42.277127166Z" level=info msg="StopContainer for \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\" with timeout 30 (s)" Aug 13 00:54:42.278448 env[1434]: time="2025-08-13T00:54:42.278411288Z" level=info msg="Stop container \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\" with signal terminated" Aug 13 00:54:42.292965 env[1434]: time="2025-08-13T00:54:42.292871428Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:54:42.297782 systemd[1]: cri-containerd-d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d.scope: Deactivated successfully. Aug 13 00:54:42.303301 env[1434]: time="2025-08-13T00:54:42.303238201Z" level=info msg="StopContainer for \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\" with timeout 2 (s)" Aug 13 00:54:42.303584 env[1434]: time="2025-08-13T00:54:42.303538106Z" level=info msg="Stop container \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\" with signal terminated" Aug 13 00:54:42.312968 systemd-networkd[1597]: lxc_health: Link DOWN Aug 13 00:54:42.312977 systemd-networkd[1597]: lxc_health: Lost carrier Aug 13 00:54:42.334306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d-rootfs.mount: Deactivated successfully. Aug 13 00:54:42.340686 systemd[1]: cri-containerd-bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33.scope: Deactivated successfully. Aug 13 00:54:42.340967 systemd[1]: cri-containerd-bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33.scope: Consumed 7.158s CPU time. 
Aug 13 00:54:42.361899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33-rootfs.mount: Deactivated successfully. Aug 13 00:54:42.378406 env[1434]: time="2025-08-13T00:54:42.378356251Z" level=info msg="shim disconnected" id=d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d Aug 13 00:54:42.378406 env[1434]: time="2025-08-13T00:54:42.378407452Z" level=warning msg="cleaning up after shim disconnected" id=d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d namespace=k8s.io Aug 13 00:54:42.378629 env[1434]: time="2025-08-13T00:54:42.378418952Z" level=info msg="cleaning up dead shim" Aug 13 00:54:42.379880 env[1434]: time="2025-08-13T00:54:42.379834375Z" level=info msg="shim disconnected" id=bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33 Aug 13 00:54:42.380169 env[1434]: time="2025-08-13T00:54:42.380143880Z" level=warning msg="cleaning up after shim disconnected" id=bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33 namespace=k8s.io Aug 13 00:54:42.380294 env[1434]: time="2025-08-13T00:54:42.380272583Z" level=info msg="cleaning up dead shim" Aug 13 00:54:42.388512 env[1434]: time="2025-08-13T00:54:42.388468119Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4057 runtime=io.containerd.runc.v2\n" Aug 13 00:54:42.393574 env[1434]: time="2025-08-13T00:54:42.393531303Z" level=info msg="StopContainer for \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\" returns successfully" Aug 13 00:54:42.394129 env[1434]: time="2025-08-13T00:54:42.393846708Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4061 runtime=io.containerd.runc.v2\n" Aug 13 00:54:42.394733 env[1434]: time="2025-08-13T00:54:42.394703723Z" level=info msg="StopPodSandbox for 
\"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\"" Aug 13 00:54:42.394828 env[1434]: time="2025-08-13T00:54:42.394776324Z" level=info msg="Container to stop \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:42.397484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa-shm.mount: Deactivated successfully. Aug 13 00:54:42.400186 env[1434]: time="2025-08-13T00:54:42.400151013Z" level=info msg="StopContainer for \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\" returns successfully" Aug 13 00:54:42.400803 env[1434]: time="2025-08-13T00:54:42.400775724Z" level=info msg="StopPodSandbox for \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\"" Aug 13 00:54:42.400987 env[1434]: time="2025-08-13T00:54:42.400959227Z" level=info msg="Container to stop \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:42.401159 env[1434]: time="2025-08-13T00:54:42.401134030Z" level=info msg="Container to stop \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:42.401321 env[1434]: time="2025-08-13T00:54:42.401297332Z" level=info msg="Container to stop \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:42.401445 env[1434]: time="2025-08-13T00:54:42.401418834Z" level=info msg="Container to stop \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:42.401563 env[1434]: time="2025-08-13T00:54:42.401542336Z" level=info msg="Container to stop 
\"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:42.404204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f-shm.mount: Deactivated successfully. Aug 13 00:54:42.415256 systemd[1]: cri-containerd-c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa.scope: Deactivated successfully. Aug 13 00:54:42.418976 systemd[1]: cri-containerd-5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f.scope: Deactivated successfully. Aug 13 00:54:42.456277 env[1434]: time="2025-08-13T00:54:42.456221546Z" level=info msg="shim disconnected" id=c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa Aug 13 00:54:42.457029 env[1434]: time="2025-08-13T00:54:42.456980359Z" level=warning msg="cleaning up after shim disconnected" id=c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa namespace=k8s.io Aug 13 00:54:42.457239 env[1434]: time="2025-08-13T00:54:42.457213063Z" level=info msg="cleaning up dead shim" Aug 13 00:54:42.457357 env[1434]: time="2025-08-13T00:54:42.456228046Z" level=info msg="shim disconnected" id=5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f Aug 13 00:54:42.457477 env[1434]: time="2025-08-13T00:54:42.457458167Z" level=warning msg="cleaning up after shim disconnected" id=5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f namespace=k8s.io Aug 13 00:54:42.457571 env[1434]: time="2025-08-13T00:54:42.457554568Z" level=info msg="cleaning up dead shim" Aug 13 00:54:42.467187 env[1434]: time="2025-08-13T00:54:42.467148528Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n" Aug 13 00:54:42.467538 env[1434]: time="2025-08-13T00:54:42.467500134Z" level=info msg="TearDown network for sandbox 
\"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" successfully" Aug 13 00:54:42.467621 env[1434]: time="2025-08-13T00:54:42.467537935Z" level=info msg="StopPodSandbox for \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" returns successfully" Aug 13 00:54:42.471101 env[1434]: time="2025-08-13T00:54:42.471074393Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n" Aug 13 00:54:42.471772 env[1434]: time="2025-08-13T00:54:42.471745405Z" level=info msg="TearDown network for sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" successfully" Aug 13 00:54:42.473887 env[1434]: time="2025-08-13T00:54:42.473860540Z" level=info msg="StopPodSandbox for \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" returns successfully" Aug 13 00:54:42.592272 kubelet[2438]: I0813 00:54:42.592220 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-run\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.592775 kubelet[2438]: I0813 00:54:42.592312 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.592775 kubelet[2438]: I0813 00:54:42.592395 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-cgroup\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.592775 kubelet[2438]: I0813 00:54:42.592456 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.592775 kubelet[2438]: I0813 00:54:42.592429 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-config-path\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.592775 kubelet[2438]: I0813 00:54:42.592498 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4pv\" (UniqueName: \"kubernetes.io/projected/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-kube-api-access-6x4pv\") pod \"10c3e8b7-cc8a-406f-bca5-8f634ceadf5d\" (UID: \"10c3e8b7-cc8a-406f-bca5-8f634ceadf5d\") " Aug 13 00:54:42.593081 kubelet[2438]: I0813 00:54:42.593059 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-etc-cni-netd\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.593207 kubelet[2438]: I0813 00:54:42.593193 2438 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-kernel\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.593312 kubelet[2438]: I0813 00:54:42.593299 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-lib-modules\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.593408 kubelet[2438]: I0813 00:54:42.593395 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cni-path\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.593509 kubelet[2438]: I0813 00:54:42.593495 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hostproc\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.593614 kubelet[2438]: I0813 00:54:42.593600 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-cilium-config-path\") pod \"10c3e8b7-cc8a-406f-bca5-8f634ceadf5d\" (UID: \"10c3e8b7-cc8a-406f-bca5-8f634ceadf5d\") " Aug 13 00:54:42.593724 kubelet[2438]: I0813 00:54:42.593709 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hubble-tls\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: 
\"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.593834 kubelet[2438]: I0813 00:54:42.593821 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-net\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.593937 kubelet[2438]: I0813 00:54:42.593922 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-bpf-maps\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.594062 kubelet[2438]: I0813 00:54:42.594045 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2gnj\" (UniqueName: \"kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-kube-api-access-z2gnj\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.594189 kubelet[2438]: I0813 00:54:42.594166 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-clustermesh-secrets\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.594300 kubelet[2438]: I0813 00:54:42.594287 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-xtables-lock\") pod \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\" (UID: \"0aad5066-d6e7-43d3-a77d-c2a5b1d926a3\") " Aug 13 00:54:42.594424 kubelet[2438]: I0813 00:54:42.594411 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-run\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.594517 kubelet[2438]: I0813 00:54:42.594506 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-cgroup\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.594624 kubelet[2438]: I0813 00:54:42.594609 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.595306 kubelet[2438]: I0813 00:54:42.595279 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.595447 kubelet[2438]: I0813 00:54:42.595292 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:54:42.597972 kubelet[2438]: I0813 00:54:42.597940 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10c3e8b7-cc8a-406f-bca5-8f634ceadf5d" (UID: "10c3e8b7-cc8a-406f-bca5-8f634ceadf5d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:54:42.598094 kubelet[2438]: I0813 00:54:42.598009 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.598094 kubelet[2438]: I0813 00:54:42.598041 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.598094 kubelet[2438]: I0813 00:54:42.598066 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cni-path" (OuterVolumeSpecName: "cni-path") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.598094 kubelet[2438]: I0813 00:54:42.598084 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hostproc" (OuterVolumeSpecName: "hostproc") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.598420 kubelet[2438]: I0813 00:54:42.598401 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.598547 kubelet[2438]: I0813 00:54:42.598531 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:54:42.600256 kubelet[2438]: I0813 00:54:42.600227 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-kube-api-access-6x4pv" (OuterVolumeSpecName: "kube-api-access-6x4pv") pod "10c3e8b7-cc8a-406f-bca5-8f634ceadf5d" (UID: "10c3e8b7-cc8a-406f-bca5-8f634ceadf5d"). InnerVolumeSpecName "kube-api-access-6x4pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:54:42.604150 kubelet[2438]: I0813 00:54:42.604123 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:54:42.604395 kubelet[2438]: I0813 00:54:42.604373 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-kube-api-access-z2gnj" (OuterVolumeSpecName: "kube-api-access-z2gnj") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "kube-api-access-z2gnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:54:42.607081 kubelet[2438]: I0813 00:54:42.607050 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" (UID: "0aad5066-d6e7-43d3-a77d-c2a5b1d926a3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:54:42.695746 kubelet[2438]: I0813 00:54:42.695696 2438 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-clustermesh-secrets\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.695746 kubelet[2438]: I0813 00:54:42.695741 2438 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-xtables-lock\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.695746 kubelet[2438]: I0813 00:54:42.695758 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cilium-config-path\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695772 2438 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4pv\" (UniqueName: \"kubernetes.io/projected/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-kube-api-access-6x4pv\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695789 2438 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-etc-cni-netd\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695805 2438 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695818 2438 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-lib-modules\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695830 2438 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-cni-path\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695843 2438 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hostproc\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695856 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d-cilium-config-path\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696114 kubelet[2438]: I0813 00:54:42.695869 2438 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-hubble-tls\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696369 kubelet[2438]: I0813 00:54:42.695883 2438 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-host-proc-sys-net\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696369 kubelet[2438]: I0813 00:54:42.695899 2438 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-bpf-maps\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.696369 kubelet[2438]: I0813 00:54:42.695918 2438 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2gnj\" (UniqueName: 
\"kubernetes.io/projected/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3-kube-api-access-z2gnj\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\"" Aug 13 00:54:42.977946 kubelet[2438]: I0813 00:54:42.975571 2438 scope.go:117] "RemoveContainer" containerID="d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d" Aug 13 00:54:42.982146 env[1434]: time="2025-08-13T00:54:42.982094696Z" level=info msg="RemoveContainer for \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\"" Aug 13 00:54:42.986470 systemd[1]: Removed slice kubepods-besteffort-pod10c3e8b7_cc8a_406f_bca5_8f634ceadf5d.slice. Aug 13 00:54:42.991839 systemd[1]: Removed slice kubepods-burstable-pod0aad5066_d6e7_43d3_a77d_c2a5b1d926a3.slice. Aug 13 00:54:42.991958 systemd[1]: kubepods-burstable-pod0aad5066_d6e7_43d3_a77d_c2a5b1d926a3.slice: Consumed 7.275s CPU time. Aug 13 00:54:42.995681 env[1434]: time="2025-08-13T00:54:42.995620821Z" level=info msg="RemoveContainer for \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\" returns successfully" Aug 13 00:54:42.997075 kubelet[2438]: I0813 00:54:42.997052 2438 scope.go:117] "RemoveContainer" containerID="d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d" Aug 13 00:54:42.997642 env[1434]: time="2025-08-13T00:54:42.997530153Z" level=error msg="ContainerStatus for \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\": not found" Aug 13 00:54:42.999088 kubelet[2438]: E0813 00:54:42.997833 2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\": not found" containerID="d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d" Aug 13 00:54:42.999088 kubelet[2438]: I0813 
00:54:42.997867 2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d"} err="failed to get container status \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9d89b9da899398f795db7d9fd0a3e2fb01d61ae884323986cea541aabec5b7d\": not found" Aug 13 00:54:42.999088 kubelet[2438]: I0813 00:54:42.998000 2438 scope.go:117] "RemoveContainer" containerID="bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33" Aug 13 00:54:42.999410 env[1434]: time="2025-08-13T00:54:42.999378284Z" level=info msg="RemoveContainer for \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\"" Aug 13 00:54:43.007647 env[1434]: time="2025-08-13T00:54:43.007611420Z" level=info msg="RemoveContainer for \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\" returns successfully" Aug 13 00:54:43.008477 kubelet[2438]: I0813 00:54:43.008443 2438 scope.go:117] "RemoveContainer" containerID="15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3" Aug 13 00:54:43.012356 env[1434]: time="2025-08-13T00:54:43.012317598Z" level=info msg="RemoveContainer for \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\"" Aug 13 00:54:43.019602 env[1434]: time="2025-08-13T00:54:43.019571218Z" level=info msg="RemoveContainer for \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\" returns successfully" Aug 13 00:54:43.019796 kubelet[2438]: I0813 00:54:43.019771 2438 scope.go:117] "RemoveContainer" containerID="da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7" Aug 13 00:54:43.022651 env[1434]: time="2025-08-13T00:54:43.022603068Z" level=info msg="RemoveContainer for \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\"" Aug 13 00:54:43.033071 env[1434]: time="2025-08-13T00:54:43.033035940Z" level=info 
msg="RemoveContainer for \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\" returns successfully" Aug 13 00:54:43.033284 kubelet[2438]: I0813 00:54:43.033229 2438 scope.go:117] "RemoveContainer" containerID="6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561" Aug 13 00:54:43.034297 env[1434]: time="2025-08-13T00:54:43.034270861Z" level=info msg="RemoveContainer for \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\"" Aug 13 00:54:43.042191 env[1434]: time="2025-08-13T00:54:43.042159891Z" level=info msg="RemoveContainer for \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\" returns successfully" Aug 13 00:54:43.042341 kubelet[2438]: I0813 00:54:43.042325 2438 scope.go:117] "RemoveContainer" containerID="75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d" Aug 13 00:54:43.043331 env[1434]: time="2025-08-13T00:54:43.043299510Z" level=info msg="RemoveContainer for \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\"" Aug 13 00:54:43.050453 env[1434]: time="2025-08-13T00:54:43.050419728Z" level=info msg="RemoveContainer for \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\" returns successfully" Aug 13 00:54:43.050592 kubelet[2438]: I0813 00:54:43.050571 2438 scope.go:117] "RemoveContainer" containerID="bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33" Aug 13 00:54:43.050853 env[1434]: time="2025-08-13T00:54:43.050783434Z" level=error msg="ContainerStatus for \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\": not found" Aug 13 00:54:43.051250 kubelet[2438]: E0813 00:54:43.051018 2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\": not found" containerID="bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33" Aug 13 00:54:43.051414 kubelet[2438]: I0813 00:54:43.051381 2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33"} err="failed to get container status \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\": rpc error: code = NotFound desc = an error occurred when try to find container \"bad4caefaf7a767e3475248c6573672961718e25c980d1df9376ed0ed06eaf33\": not found" Aug 13 00:54:43.051482 kubelet[2438]: I0813 00:54:43.051430 2438 scope.go:117] "RemoveContainer" containerID="15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3" Aug 13 00:54:43.051718 env[1434]: time="2025-08-13T00:54:43.051661148Z" level=error msg="ContainerStatus for \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\": not found" Aug 13 00:54:43.051887 kubelet[2438]: E0813 00:54:43.051865 2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\": not found" containerID="15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3" Aug 13 00:54:43.051959 kubelet[2438]: I0813 00:54:43.051898 2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3"} err="failed to get container status \"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"15b8e2791918dfeed657942d0be298cfb436492bdcf14a6c8614aba3e14413c3\": not found" Aug 13 00:54:43.051959 kubelet[2438]: I0813 00:54:43.051933 2438 scope.go:117] "RemoveContainer" containerID="da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7" Aug 13 00:54:43.053169 env[1434]: time="2025-08-13T00:54:43.053113172Z" level=error msg="ContainerStatus for \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\": not found" Aug 13 00:54:43.054034 kubelet[2438]: E0813 00:54:43.054001 2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\": not found" containerID="da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7" Aug 13 00:54:43.054119 kubelet[2438]: I0813 00:54:43.054041 2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7"} err="failed to get container status \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"da5b0f9a94128dea9fa89cb9966f04d7f19cce1edfb341e5d27d8e7665ccc2c7\": not found" Aug 13 00:54:43.054119 kubelet[2438]: I0813 00:54:43.054075 2438 scope.go:117] "RemoveContainer" containerID="6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561" Aug 13 00:54:43.054343 env[1434]: time="2025-08-13T00:54:43.054290692Z" level=error msg="ContainerStatus for \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\": not 
found" Aug 13 00:54:43.054506 kubelet[2438]: E0813 00:54:43.054457 2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\": not found" containerID="6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561" Aug 13 00:54:43.054577 kubelet[2438]: I0813 00:54:43.054522 2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561"} err="failed to get container status \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f650ae03f713207284b6574d3b1b042ae5829d30c81e41d89ee26b8e0fe6561\": not found" Aug 13 00:54:43.054577 kubelet[2438]: I0813 00:54:43.054557 2438 scope.go:117] "RemoveContainer" containerID="75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d" Aug 13 00:54:43.054810 env[1434]: time="2025-08-13T00:54:43.054758300Z" level=error msg="ContainerStatus for \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\": not found" Aug 13 00:54:43.054944 kubelet[2438]: E0813 00:54:43.054914 2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\": not found" containerID="75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d" Aug 13 00:54:43.055034 kubelet[2438]: I0813 00:54:43.054961 2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d"} 
err="failed to get container status \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"75cf826cbb8d114a36ba051152220bebd02f24f401042838d87747925b802b1d\": not found" Aug 13 00:54:43.267859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa-rootfs.mount: Deactivated successfully. Aug 13 00:54:43.268446 systemd[1]: var-lib-kubelet-pods-10c3e8b7\x2dcc8a\x2d406f\x2dbca5\x2d8f634ceadf5d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6x4pv.mount: Deactivated successfully. Aug 13 00:54:43.268586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f-rootfs.mount: Deactivated successfully. Aug 13 00:54:43.268676 systemd[1]: var-lib-kubelet-pods-0aad5066\x2dd6e7\x2d43d3\x2da77d\x2dc2a5b1d926a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz2gnj.mount: Deactivated successfully. Aug 13 00:54:43.268776 systemd[1]: var-lib-kubelet-pods-0aad5066\x2dd6e7\x2d43d3\x2da77d\x2dc2a5b1d926a3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:54:43.268863 systemd[1]: var-lib-kubelet-pods-0aad5066\x2dd6e7\x2d43d3\x2da77d\x2dc2a5b1d926a3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:54:44.319697 sshd[3986]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:44.323263 systemd[1]: sshd@20-10.200.4.36:22-10.200.16.10:35896.service: Deactivated successfully. Aug 13 00:54:44.324197 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:54:44.324846 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:54:44.325716 systemd-logind[1420]: Removed session 23. Aug 13 00:54:44.417403 systemd[1]: Started sshd@21-10.200.4.36:22-10.200.16.10:56844.service. 
Aug 13 00:54:44.475094 kubelet[2438]: I0813 00:54:44.475053 2438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" path="/var/lib/kubelet/pods/0aad5066-d6e7-43d3-a77d-c2a5b1d926a3/volumes" Aug 13 00:54:44.475844 kubelet[2438]: I0813 00:54:44.475817 2438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c3e8b7-cc8a-406f-bca5-8f634ceadf5d" path="/var/lib/kubelet/pods/10c3e8b7-cc8a-406f-bca5-8f634ceadf5d/volumes" Aug 13 00:54:44.574485 kubelet[2438]: E0813 00:54:44.574319 2438 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:54:45.008705 sshd[4158]: Accepted publickey for core from 10.200.16.10 port 56844 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:45.010183 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:45.015169 systemd[1]: Started session-24.scope. Aug 13 00:54:45.015625 systemd-logind[1420]: New session 24 of user core. 
Aug 13 00:54:46.061247 kubelet[2438]: E0813 00:54:46.061202 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" containerName="apply-sysctl-overwrites" Aug 13 00:54:46.061844 kubelet[2438]: E0813 00:54:46.061821 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10c3e8b7-cc8a-406f-bca5-8f634ceadf5d" containerName="cilium-operator" Aug 13 00:54:46.061969 kubelet[2438]: E0813 00:54:46.061955 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" containerName="mount-bpf-fs" Aug 13 00:54:46.062125 kubelet[2438]: E0813 00:54:46.062109 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" containerName="clean-cilium-state" Aug 13 00:54:46.062250 kubelet[2438]: E0813 00:54:46.062235 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" containerName="mount-cgroup" Aug 13 00:54:46.062357 kubelet[2438]: E0813 00:54:46.062343 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" containerName="cilium-agent" Aug 13 00:54:46.062520 kubelet[2438]: I0813 00:54:46.062495 2438 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c3e8b7-cc8a-406f-bca5-8f634ceadf5d" containerName="cilium-operator" Aug 13 00:54:46.062624 kubelet[2438]: I0813 00:54:46.062611 2438 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aad5066-d6e7-43d3-a77d-c2a5b1d926a3" containerName="cilium-agent" Aug 13 00:54:46.071261 systemd[1]: Created slice kubepods-burstable-podcca490cd_f665_47f0_99fa_3a8f120d006f.slice. Aug 13 00:54:46.149466 sshd[4158]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:46.152854 systemd[1]: sshd@21-10.200.4.36:22-10.200.16.10:56844.service: Deactivated successfully. 
Aug 13 00:54:46.153723 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:54:46.154409 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:54:46.155400 systemd-logind[1420]: Removed session 24. Aug 13 00:54:46.215024 kubelet[2438]: I0813 00:54:46.214961 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-run\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215236 kubelet[2438]: I0813 00:54:46.215036 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-xtables-lock\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215236 kubelet[2438]: I0813 00:54:46.215069 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwbj2\" (UniqueName: \"kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-kube-api-access-qwbj2\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215236 kubelet[2438]: I0813 00:54:46.215097 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cni-path\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215236 kubelet[2438]: I0813 00:54:46.215121 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-ipsec-secrets\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215236 kubelet[2438]: I0813 00:54:46.215145 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-net\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215522 kubelet[2438]: I0813 00:54:46.215170 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-clustermesh-secrets\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215522 kubelet[2438]: I0813 00:54:46.215196 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-config-path\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215522 kubelet[2438]: I0813 00:54:46.215219 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-bpf-maps\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215522 kubelet[2438]: I0813 00:54:46.215242 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-hostproc\") pod \"cilium-5jvvg\" (UID: 
\"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215522 kubelet[2438]: I0813 00:54:46.215268 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-hubble-tls\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215522 kubelet[2438]: I0813 00:54:46.215300 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-etc-cni-netd\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215766 kubelet[2438]: I0813 00:54:46.215325 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-lib-modules\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215766 kubelet[2438]: I0813 00:54:46.215357 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-kernel\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.215766 kubelet[2438]: I0813 00:54:46.215387 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-cgroup\") pod \"cilium-5jvvg\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " pod="kube-system/cilium-5jvvg" Aug 13 00:54:46.251022 systemd[1]: Started 
sshd@22-10.200.4.36:22-10.200.16.10:56846.service. Aug 13 00:54:46.383668 env[1434]: time="2025-08-13T00:54:46.382587342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5jvvg,Uid:cca490cd-f665-47f0-99fa-3a8f120d006f,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:46.412529 env[1434]: time="2025-08-13T00:54:46.412460227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:46.412714 env[1434]: time="2025-08-13T00:54:46.412498028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:46.412714 env[1434]: time="2025-08-13T00:54:46.412699231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:46.413108 env[1434]: time="2025-08-13T00:54:46.412974435Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0 pid=4182 runtime=io.containerd.runc.v2 Aug 13 00:54:46.425633 systemd[1]: Started cri-containerd-02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0.scope. 
Aug 13 00:54:46.456268 env[1434]: time="2025-08-13T00:54:46.456230839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5jvvg,Uid:cca490cd-f665-47f0-99fa-3a8f120d006f,Namespace:kube-system,Attempt:0,} returns sandbox id \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\"" Aug 13 00:54:46.460097 env[1434]: time="2025-08-13T00:54:46.459417590Z" level=info msg="CreateContainer within sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:54:46.497617 env[1434]: time="2025-08-13T00:54:46.497571310Z" level=info msg="CreateContainer within sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\"" Aug 13 00:54:46.498439 env[1434]: time="2025-08-13T00:54:46.498408624Z" level=info msg="StartContainer for \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\"" Aug 13 00:54:46.516282 systemd[1]: Started cri-containerd-65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4.scope. Aug 13 00:54:46.528355 systemd[1]: cri-containerd-65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4.scope: Deactivated successfully. 
Aug 13 00:54:46.594745 env[1434]: time="2025-08-13T00:54:46.594687889Z" level=info msg="shim disconnected" id=65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4 Aug 13 00:54:46.594745 env[1434]: time="2025-08-13T00:54:46.594748890Z" level=warning msg="cleaning up after shim disconnected" id=65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4 namespace=k8s.io Aug 13 00:54:46.595146 env[1434]: time="2025-08-13T00:54:46.594760890Z" level=info msg="cleaning up dead shim" Aug 13 00:54:46.603208 env[1434]: time="2025-08-13T00:54:46.603167627Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4243 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:54:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 00:54:46.603550 env[1434]: time="2025-08-13T00:54:46.603443631Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Aug 13 00:54:46.607081 env[1434]: time="2025-08-13T00:54:46.607036190Z" level=error msg="Failed to pipe stderr of container \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\"" error="reading from a closed fifo" Aug 13 00:54:46.607186 env[1434]: time="2025-08-13T00:54:46.607118191Z" level=error msg="Failed to pipe stdout of container \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\"" error="reading from a closed fifo" Aug 13 00:54:46.611803 env[1434]: time="2025-08-13T00:54:46.611647964Z" level=error msg="StartContainer for \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 00:54:46.612079 kubelet[2438]: E0813 00:54:46.612041 2438 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4" Aug 13 00:54:46.612265 kubelet[2438]: E0813 00:54:46.612232 2438 kuberuntime_manager.go:1274] "Unhandled Error" err=< Aug 13 00:54:46.612265 kubelet[2438]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 00:54:46.612265 kubelet[2438]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 00:54:46.612265 kubelet[2438]: rm /hostbin/cilium-mount Aug 13 00:54:46.612445 kubelet[2438]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwbj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5jvvg_kube-system(cca490cd-f665-47f0-99fa-3a8f120d006f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 00:54:46.612445 kubelet[2438]: > logger="UnhandledError" Aug 13 00:54:46.613719 kubelet[2438]: E0813 00:54:46.613676 2438 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5jvvg" podUID="cca490cd-f665-47f0-99fa-3a8f120d006f" Aug 13 00:54:46.844178 sshd[4168]: Accepted publickey for core from 10.200.16.10 port 56846 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:46.845627 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:46.850451 systemd-logind[1420]: New session 25 of user core. Aug 13 00:54:46.850961 systemd[1]: Started session-25.scope. Aug 13 00:54:47.001712 env[1434]: time="2025-08-13T00:54:47.001665203Z" level=info msg="CreateContainer within sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Aug 13 00:54:47.033340 env[1434]: time="2025-08-13T00:54:47.033296014Z" level=info msg="CreateContainer within sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\"" Aug 13 00:54:47.033837 env[1434]: time="2025-08-13T00:54:47.033803323Z" level=info msg="StartContainer for \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\"" Aug 13 00:54:47.051214 systemd[1]: Started cri-containerd-4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032.scope. Aug 13 00:54:47.063215 systemd[1]: cri-containerd-4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032.scope: Deactivated successfully. 
Aug 13 00:54:47.081930 env[1434]: time="2025-08-13T00:54:47.081874199Z" level=info msg="shim disconnected" id=4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032 Aug 13 00:54:47.082164 env[1434]: time="2025-08-13T00:54:47.081934200Z" level=warning msg="cleaning up after shim disconnected" id=4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032 namespace=k8s.io Aug 13 00:54:47.082164 env[1434]: time="2025-08-13T00:54:47.081946801Z" level=info msg="cleaning up dead shim" Aug 13 00:54:47.089540 env[1434]: time="2025-08-13T00:54:47.089499523Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4281 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:54:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 00:54:47.089824 env[1434]: time="2025-08-13T00:54:47.089765127Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Aug 13 00:54:47.090106 env[1434]: time="2025-08-13T00:54:47.090060332Z" level=error msg="Failed to pipe stderr of container \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\"" error="reading from a closed fifo" Aug 13 00:54:47.090182 env[1434]: time="2025-08-13T00:54:47.090064732Z" level=error msg="Failed to pipe stdout of container \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\"" error="reading from a closed fifo" Aug 13 00:54:47.094772 env[1434]: time="2025-08-13T00:54:47.094167898Z" level=error msg="StartContainer for \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 00:54:47.094869 kubelet[2438]: E0813 00:54:47.094425 2438 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032" Aug 13 00:54:47.095234 kubelet[2438]: E0813 00:54:47.094898 2438 kuberuntime_manager.go:1274] "Unhandled Error" err=< Aug 13 00:54:47.095234 kubelet[2438]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 00:54:47.095234 kubelet[2438]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 00:54:47.095234 kubelet[2438]: rm /hostbin/cilium-mount Aug 13 00:54:47.095234 kubelet[2438]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwbj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5jvvg_kube-system(cca490cd-f665-47f0-99fa-3a8f120d006f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 00:54:47.095234 kubelet[2438]: > logger="UnhandledError" Aug 13 00:54:47.096424 kubelet[2438]: E0813 00:54:47.096394 2438 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5jvvg" podUID="cca490cd-f665-47f0-99fa-3a8f120d006f" Aug 13 00:54:47.344645 sshd[4168]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:47.347877 systemd[1]: sshd@22-10.200.4.36:22-10.200.16.10:56846.service: Deactivated successfully. Aug 13 00:54:47.348764 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:54:47.350368 systemd-logind[1420]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:54:47.351999 systemd-logind[1420]: Removed session 25. Aug 13 00:54:47.443563 systemd[1]: Started sshd@23-10.200.4.36:22-10.200.16.10:56858.service. Aug 13 00:54:48.000795 kubelet[2438]: I0813 00:54:48.000692 2438 scope.go:117] "RemoveContainer" containerID="65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4" Aug 13 00:54:48.001307 env[1434]: time="2025-08-13T00:54:48.001261057Z" level=info msg="StopPodSandbox for \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\"" Aug 13 00:54:48.001868 env[1434]: time="2025-08-13T00:54:48.001819566Z" level=info msg="Container to stop \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:48.002021 env[1434]: time="2025-08-13T00:54:48.001979268Z" level=info msg="Container to stop \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:54:48.005457 env[1434]: time="2025-08-13T00:54:48.003638795Z" level=info msg="RemoveContainer for \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\"" Aug 13 
00:54:48.008792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0-shm.mount: Deactivated successfully. Aug 13 00:54:48.017518 systemd[1]: cri-containerd-02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0.scope: Deactivated successfully. Aug 13 00:54:48.019253 env[1434]: time="2025-08-13T00:54:48.019213645Z" level=info msg="RemoveContainer for \"65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4\" returns successfully" Aug 13 00:54:48.039045 sshd[4302]: Accepted publickey for core from 10.200.16.10 port 56858 ssh2: RSA SHA256:rG+KMDo131ToO+q3jk0DRYylimwUFMitj1EQgQc6PF0 Aug 13 00:54:48.041140 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:48.046773 systemd[1]: Started session-26.scope. Aug 13 00:54:48.047319 systemd-logind[1420]: New session 26 of user core. Aug 13 00:54:48.057197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0-rootfs.mount: Deactivated successfully. 
Aug 13 00:54:48.077898 env[1434]: time="2025-08-13T00:54:48.077844587Z" level=info msg="shim disconnected" id=02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0 Aug 13 00:54:48.078105 env[1434]: time="2025-08-13T00:54:48.077909388Z" level=warning msg="cleaning up after shim disconnected" id=02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0 namespace=k8s.io Aug 13 00:54:48.078105 env[1434]: time="2025-08-13T00:54:48.077922389Z" level=info msg="cleaning up dead shim" Aug 13 00:54:48.086462 env[1434]: time="2025-08-13T00:54:48.086428025Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4324 runtime=io.containerd.runc.v2\n" Aug 13 00:54:48.086767 env[1434]: time="2025-08-13T00:54:48.086732530Z" level=info msg="TearDown network for sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" successfully" Aug 13 00:54:48.086848 env[1434]: time="2025-08-13T00:54:48.086766531Z" level=info msg="StopPodSandbox for \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" returns successfully" Aug 13 00:54:48.227737 kubelet[2438]: I0813 00:54:48.227681 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-lib-modules\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.227737 kubelet[2438]: I0813 00:54:48.227732 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-kernel\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.228430 kubelet[2438]: I0813 00:54:48.227762 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-net\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.228430 kubelet[2438]: I0813 00:54:48.227790 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-run\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.228430 kubelet[2438]: I0813 00:54:48.227827 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-clustermesh-secrets\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.228430 kubelet[2438]: I0813 00:54:48.227856 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-ipsec-secrets\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.228430 kubelet[2438]: I0813 00:54:48.227889 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-config-path\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.228430 kubelet[2438]: I0813 00:54:48.227917 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-bpf-maps\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") " Aug 13 00:54:48.228430 
kubelet[2438]: I0813 00:54:48.227952 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-xtables-lock\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") "
Aug 13 00:54:48.228430 kubelet[2438]: I0813 00:54:48.227977 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cni-path\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") "
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.228953 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwbj2\" (UniqueName: \"kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-kube-api-access-qwbj2\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") "
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229042 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-etc-cni-netd\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") "
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229098 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-cgroup\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") "
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229148 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-hostproc\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") "
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229184 2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-hubble-tls\") pod \"cca490cd-f665-47f0-99fa-3a8f120d006f\" (UID: \"cca490cd-f665-47f0-99fa-3a8f120d006f\") "
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229742 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229796 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229839 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.230986 kubelet[2438]: I0813 00:54:48.229876 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.234709 kubelet[2438]: I0813 00:54:48.234677 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.234916 kubelet[2438]: I0813 00:54:48.234885 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.235104 kubelet[2438]: I0813 00:54:48.235079 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cni-path" (OuterVolumeSpecName: "cni-path") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.235854 kubelet[2438]: I0813 00:54:48.235822 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:54:48.235960 kubelet[2438]: I0813 00:54:48.235891 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.235960 kubelet[2438]: I0813 00:54:48.235943 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.236084 kubelet[2438]: I0813 00:54:48.235971 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-hostproc" (OuterVolumeSpecName: "hostproc") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:54:48.241117 systemd[1]: var-lib-kubelet-pods-cca490cd\x2df665\x2d47f0\x2d99fa\x2d3a8f120d006f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:54:48.245779 systemd[1]: var-lib-kubelet-pods-cca490cd\x2df665\x2d47f0\x2d99fa\x2d3a8f120d006f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:54:48.247167 kubelet[2438]: I0813 00:54:48.247132 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:54:48.251039 kubelet[2438]: I0813 00:54:48.250911 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:54:48.252813 kubelet[2438]: I0813 00:54:48.252782 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:54:48.256140 kubelet[2438]: I0813 00:54:48.256111 2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-kube-api-access-qwbj2" (OuterVolumeSpecName: "kube-api-access-qwbj2") pod "cca490cd-f665-47f0-99fa-3a8f120d006f" (UID: "cca490cd-f665-47f0-99fa-3a8f120d006f"). InnerVolumeSpecName "kube-api-access-qwbj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:54:48.330522 kubelet[2438]: I0813 00:54:48.330479 2438 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330522 kubelet[2438]: I0813 00:54:48.330520 2438 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-host-proc-sys-net\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330539 2438 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-lib-modules\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330555 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-run\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330570 2438 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-clustermesh-secrets\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330582 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-ipsec-secrets\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330597 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-config-path\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330611 2438 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-bpf-maps\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330626 2438 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cni-path\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330639 2438 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-xtables-lock\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330661 2438 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwbj2\" (UniqueName: \"kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-kube-api-access-qwbj2\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330674 2438 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-etc-cni-netd\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330687 2438 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-cilium-cgroup\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330700 2438 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cca490cd-f665-47f0-99fa-3a8f120d006f-hostproc\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.330758 kubelet[2438]: I0813 00:54:48.330714 2438 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cca490cd-f665-47f0-99fa-3a8f120d006f-hubble-tls\") on node \"ci-3510.3.8-a-09b422438d\" DevicePath \"\""
Aug 13 00:54:48.333440 systemd[1]: var-lib-kubelet-pods-cca490cd\x2df665\x2d47f0\x2d99fa\x2d3a8f120d006f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqwbj2.mount: Deactivated successfully.
Aug 13 00:54:48.333586 systemd[1]: var-lib-kubelet-pods-cca490cd\x2df665\x2d47f0\x2d99fa\x2d3a8f120d006f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 00:54:48.478652 systemd[1]: Removed slice kubepods-burstable-podcca490cd_f665_47f0_99fa_3a8f120d006f.slice.
Aug 13 00:54:49.004416 kubelet[2438]: I0813 00:54:49.004376 2438 scope.go:117] "RemoveContainer" containerID="4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032"
Aug 13 00:54:49.010324 env[1434]: time="2025-08-13T00:54:49.010285570Z" level=info msg="RemoveContainer for \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\""
Aug 13 00:54:49.031283 kubelet[2438]: I0813 00:54:49.031225 2438 setters.go:600] "Node became not ready" node="ci-3510.3.8-a-09b422438d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:54:49Z","lastTransitionTime":"2025-08-13T00:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:54:49.036169 env[1434]: time="2025-08-13T00:54:49.036115182Z" level=info msg="RemoveContainer for \"4c1df76373b7edb0989472b5b38790c3ea103449427c06173f083a4dcaca3032\" returns successfully"
Aug 13 00:54:49.053519 kubelet[2438]: E0813 00:54:49.053482 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cca490cd-f665-47f0-99fa-3a8f120d006f" containerName="mount-cgroup"
Aug 13 00:54:49.053822 kubelet[2438]: I0813 00:54:49.053787 2438 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca490cd-f665-47f0-99fa-3a8f120d006f" containerName="mount-cgroup"
Aug 13 00:54:49.053950 kubelet[2438]: I0813 00:54:49.053936 2438 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca490cd-f665-47f0-99fa-3a8f120d006f" containerName="mount-cgroup"
Aug 13 00:54:49.054079 kubelet[2438]: E0813 00:54:49.054065 2438 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cca490cd-f665-47f0-99fa-3a8f120d006f" containerName="mount-cgroup"
Aug 13 00:54:49.061832 systemd[1]: Created slice kubepods-burstable-poda1feec12_89c4_42af_a67e_c599eb0ac629.slice.
Aug 13 00:54:49.134616 kubelet[2438]: I0813 00:54:49.134556 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-cilium-run\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134616 kubelet[2438]: I0813 00:54:49.134605 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-lib-modules\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134633 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1feec12-89c4-42af-a67e-c599eb0ac629-hubble-tls\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134660 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-hostproc\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134683 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-xtables-lock\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134706 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1feec12-89c4-42af-a67e-c599eb0ac629-cilium-ipsec-secrets\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134730 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1feec12-89c4-42af-a67e-c599eb0ac629-cilium-config-path\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134755 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-bpf-maps\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134778 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-etc-cni-netd\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134807 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-host-proc-sys-net\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134830 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-cilium-cgroup\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134858 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-cni-path\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.134900 kubelet[2438]: I0813 00:54:49.134887 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1feec12-89c4-42af-a67e-c599eb0ac629-host-proc-sys-kernel\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.135427 kubelet[2438]: I0813 00:54:49.134918 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz56c\" (UniqueName: \"kubernetes.io/projected/a1feec12-89c4-42af-a67e-c599eb0ac629-kube-api-access-xz56c\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.135427 kubelet[2438]: I0813 00:54:49.134952 2438 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1feec12-89c4-42af-a67e-c599eb0ac629-clustermesh-secrets\") pod \"cilium-p2cht\" (UID: \"a1feec12-89c4-42af-a67e-c599eb0ac629\") " pod="kube-system/cilium-p2cht"
Aug 13 00:54:49.365849 env[1434]: time="2025-08-13T00:54:49.365791850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p2cht,Uid:a1feec12-89c4-42af-a67e-c599eb0ac629,Namespace:kube-system,Attempt:0,}"
Aug 13 00:54:49.398116 env[1434]: time="2025-08-13T00:54:49.398040966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:54:49.398116 env[1434]: time="2025-08-13T00:54:49.398078166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:54:49.398116 env[1434]: time="2025-08-13T00:54:49.398092266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:54:49.399195 env[1434]: time="2025-08-13T00:54:49.398485473Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c pid=4358 runtime=io.containerd.runc.v2
Aug 13 00:54:49.417747 systemd[1]: run-containerd-runc-k8s.io-b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c-runc.JqkNIl.mount: Deactivated successfully.
Aug 13 00:54:49.424028 systemd[1]: Started cri-containerd-b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c.scope.
Aug 13 00:54:49.447169 env[1434]: time="2025-08-13T00:54:49.447116850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p2cht,Uid:a1feec12-89c4-42af-a67e-c599eb0ac629,Namespace:kube-system,Attempt:0,} returns sandbox id \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\""
Aug 13 00:54:49.451083 env[1434]: time="2025-08-13T00:54:49.451024212Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:54:49.481813 env[1434]: time="2025-08-13T00:54:49.481744903Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e\""
Aug 13 00:54:49.482372 env[1434]: time="2025-08-13T00:54:49.482343613Z" level=info msg="StartContainer for \"be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e\""
Aug 13 00:54:49.498900 systemd[1]: Started cri-containerd-be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e.scope.
Aug 13 00:54:49.531012 env[1434]: time="2025-08-13T00:54:49.529829771Z" level=info msg="StartContainer for \"be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e\" returns successfully"
Aug 13 00:54:49.539370 systemd[1]: cri-containerd-be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e.scope: Deactivated successfully.
Aug 13 00:54:49.575224 kubelet[2438]: E0813 00:54:49.575187 2438 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:54:49.578537 env[1434]: time="2025-08-13T00:54:49.578485549Z" level=info msg="shim disconnected" id=be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e
Aug 13 00:54:49.578537 env[1434]: time="2025-08-13T00:54:49.578535450Z" level=warning msg="cleaning up after shim disconnected" id=be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e namespace=k8s.io
Aug 13 00:54:49.578726 env[1434]: time="2025-08-13T00:54:49.578547350Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:49.586609 env[1434]: time="2025-08-13T00:54:49.586574478Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4441 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:49.703660 kubelet[2438]: W0813 00:54:49.703246 2438 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcca490cd_f665_47f0_99fa_3a8f120d006f.slice/cri-containerd-65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4.scope WatchSource:0}: container "65648d6d62745a60475150a7707caa8be788367cc3535c8f2e2e631cfaffdcd4" in namespace "k8s.io": not found
Aug 13 00:54:50.011172 env[1434]: time="2025-08-13T00:54:50.011035960Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:54:50.037472 env[1434]: time="2025-08-13T00:54:50.037422379Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34\""
Aug 13 00:54:50.038107 env[1434]: time="2025-08-13T00:54:50.038068890Z" level=info msg="StartContainer for \"92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34\""
Aug 13 00:54:50.058600 systemd[1]: Started cri-containerd-92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34.scope.
Aug 13 00:54:50.099164 systemd[1]: cri-containerd-92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34.scope: Deactivated successfully.
Aug 13 00:54:50.100034 env[1434]: time="2025-08-13T00:54:50.099953373Z" level=info msg="StartContainer for \"92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34\" returns successfully"
Aug 13 00:54:50.130920 env[1434]: time="2025-08-13T00:54:50.130870464Z" level=info msg="shim disconnected" id=92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34
Aug 13 00:54:50.130920 env[1434]: time="2025-08-13T00:54:50.130921165Z" level=warning msg="cleaning up after shim disconnected" id=92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34 namespace=k8s.io
Aug 13 00:54:50.131331 env[1434]: time="2025-08-13T00:54:50.130933965Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:50.138665 env[1434]: time="2025-08-13T00:54:50.138628187Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4500 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:50.388848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791894037.mount: Deactivated successfully.
Aug 13 00:54:50.475422 kubelet[2438]: I0813 00:54:50.475375 2438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca490cd-f665-47f0-99fa-3a8f120d006f" path="/var/lib/kubelet/pods/cca490cd-f665-47f0-99fa-3a8f120d006f/volumes"
Aug 13 00:54:51.015914 env[1434]: time="2025-08-13T00:54:51.015848326Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:54:51.055414 env[1434]: time="2025-08-13T00:54:51.055368350Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62\""
Aug 13 00:54:51.056113 env[1434]: time="2025-08-13T00:54:51.056071062Z" level=info msg="StartContainer for \"b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62\""
Aug 13 00:54:51.082841 systemd[1]: Started cri-containerd-b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62.scope.
Aug 13 00:54:51.113490 systemd[1]: cri-containerd-b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62.scope: Deactivated successfully.
Aug 13 00:54:51.117520 env[1434]: time="2025-08-13T00:54:51.117475532Z" level=info msg="StartContainer for \"b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62\" returns successfully"
Aug 13 00:54:51.147181 env[1434]: time="2025-08-13T00:54:51.147131301Z" level=info msg="shim disconnected" id=b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62
Aug 13 00:54:51.147181 env[1434]: time="2025-08-13T00:54:51.147181701Z" level=warning msg="cleaning up after shim disconnected" id=b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62 namespace=k8s.io
Aug 13 00:54:51.147491 env[1434]: time="2025-08-13T00:54:51.147192302Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:51.156474 env[1434]: time="2025-08-13T00:54:51.156430648Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4561 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:54:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Aug 13 00:54:51.389261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62-rootfs.mount: Deactivated successfully.
Aug 13 00:54:52.033940 env[1434]: time="2025-08-13T00:54:52.033886312Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:54:52.071703 env[1434]: time="2025-08-13T00:54:52.071653705Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb\""
Aug 13 00:54:52.072493 env[1434]: time="2025-08-13T00:54:52.072457118Z" level=info msg="StartContainer for \"1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb\""
Aug 13 00:54:52.098822 systemd[1]: Started cri-containerd-1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb.scope.
Aug 13 00:54:52.124476 systemd[1]: cri-containerd-1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb.scope: Deactivated successfully.
Aug 13 00:54:52.128284 env[1434]: time="2025-08-13T00:54:52.128244095Z" level=info msg="StartContainer for \"1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb\" returns successfully"
Aug 13 00:54:52.156606 env[1434]: time="2025-08-13T00:54:52.156550840Z" level=info msg="shim disconnected" id=1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb
Aug 13 00:54:52.156606 env[1434]: time="2025-08-13T00:54:52.156601740Z" level=warning msg="cleaning up after shim disconnected" id=1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb namespace=k8s.io
Aug 13 00:54:52.156606 env[1434]: time="2025-08-13T00:54:52.156613141Z" level=info msg="cleaning up dead shim"
Aug 13 00:54:52.164241 env[1434]: time="2025-08-13T00:54:52.164201960Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4615 runtime=io.containerd.runc.v2\n"
Aug 13 00:54:52.389629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb-rootfs.mount: Deactivated successfully.
Aug 13 00:54:52.815866 kubelet[2438]: W0813 00:54:52.815816 2438 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1feec12_89c4_42af_a67e_c599eb0ac629.slice/cri-containerd-be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e.scope WatchSource:0}: task be53c71c4fd6e9714371dca80780207b1df56b8001ce84b03e7414ae1eecd58e not found: not found
Aug 13 00:54:53.025460 env[1434]: time="2025-08-13T00:54:53.025408394Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:54:53.065802 env[1434]: time="2025-08-13T00:54:53.065752225Z" level=info msg="CreateContainer within sandbox \"b77610494139cd85053e19ad26df06a4ceff4d9fa953f2824492a6d943d8aa2c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e\""
Aug 13 00:54:53.066772 env[1434]: time="2025-08-13T00:54:53.066514137Z" level=info msg="StartContainer for \"013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e\""
Aug 13 00:54:53.093029 systemd[1]: Started cri-containerd-013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e.scope.
Aug 13 00:54:53.126236 env[1434]: time="2025-08-13T00:54:53.126173870Z" level=info msg="StartContainer for \"013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e\" returns successfully"
Aug 13 00:54:53.686028 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:54:54.534802 systemd[1]: run-containerd-runc-k8s.io-013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e-runc.gKUQkq.mount: Deactivated successfully.
Aug 13 00:54:55.924647 kubelet[2438]: W0813 00:54:55.924600 2438 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1feec12_89c4_42af_a67e_c599eb0ac629.slice/cri-containerd-92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34.scope WatchSource:0}: task 92fbe64c16df7f06fd1b563b4a0be8d25b92d529f59e9b9cd4c7b2e950e6ca34 not found: not found
Aug 13 00:54:56.412192 systemd-networkd[1597]: lxc_health: Link UP
Aug 13 00:54:56.443023 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:54:56.444021 systemd-networkd[1597]: lxc_health: Gained carrier
Aug 13 00:54:56.762802 systemd[1]: run-containerd-runc-k8s.io-013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e-runc.dpE8to.mount: Deactivated successfully.
Aug 13 00:54:57.395716 kubelet[2438]: I0813 00:54:57.395131 2438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p2cht" podStartSLOduration=8.395112387 podStartE2EDuration="8.395112387s" podCreationTimestamp="2025-08-13 00:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:54.048007177 +0000 UTC m=+229.707912508" watchObservedRunningTime="2025-08-13 00:54:57.395112387 +0000 UTC m=+233.055017718"
Aug 13 00:54:57.653191 systemd-networkd[1597]: lxc_health: Gained IPv6LL
Aug 13 00:54:59.006918 systemd[1]: run-containerd-runc-k8s.io-013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e-runc.IQLCRr.mount: Deactivated successfully.
Aug 13 00:54:59.035324 kubelet[2438]: W0813 00:54:59.035072 2438 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1feec12_89c4_42af_a67e_c599eb0ac629.slice/cri-containerd-b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62.scope WatchSource:0}: task b973efb3a1d248b8cf03fdaa942d0715dcf22fc7f285a77dedf042413b21cc62 not found: not found
Aug 13 00:55:01.168860 systemd[1]: run-containerd-runc-k8s.io-013119c3606fe886615871a3a5edf86cfe635691499140e389c7adc3d772d89e-runc.MgaqvG.mount: Deactivated successfully.
Aug 13 00:55:02.144083 kubelet[2438]: W0813 00:55:02.144025 2438 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1feec12_89c4_42af_a67e_c599eb0ac629.slice/cri-containerd-1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb.scope WatchSource:0}: task 1bd75fb215367bbfef8b892c2a99f75701ef19c195bc64c0a1a2a6b289923afb not found: not found
Aug 13 00:55:03.469952 sshd[4302]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:03.473502 systemd[1]: sshd@23-10.200.4.36:22-10.200.16.10:56858.service: Deactivated successfully.
Aug 13 00:55:03.474666 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:55:03.475687 systemd-logind[1420]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:55:03.476769 systemd-logind[1420]: Removed session 26.
Aug 13 00:55:04.460051 env[1434]: time="2025-08-13T00:55:04.459974710Z" level=info msg="StopPodSandbox for \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\""
Aug 13 00:55:04.460600 env[1434]: time="2025-08-13T00:55:04.460117012Z" level=info msg="TearDown network for sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" successfully"
Aug 13 00:55:04.460600 env[1434]: time="2025-08-13T00:55:04.460172413Z" level=info msg="StopPodSandbox for \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" returns successfully"
Aug 13 00:55:04.460757 env[1434]: time="2025-08-13T00:55:04.460649320Z" level=info msg="RemovePodSandbox for \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\""
Aug 13 00:55:04.460757 env[1434]: time="2025-08-13T00:55:04.460691721Z" level=info msg="Forcibly stopping sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\""
Aug 13 00:55:04.460879 env[1434]: time="2025-08-13T00:55:04.460805622Z" level=info msg="TearDown network for sandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" successfully"
Aug 13 00:55:04.471347 env[1434]: time="2025-08-13T00:55:04.471307178Z" level=info msg="RemovePodSandbox \"5451e56ca872c0ad9becd5d7747b61963167bf30e87e2427c1dc261647c6c48f\" returns successfully"
Aug 13 00:55:04.473085 env[1434]: time="2025-08-13T00:55:04.473050304Z" level=info msg="StopPodSandbox for \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\""
Aug 13 00:55:04.473189 env[1434]: time="2025-08-13T00:55:04.473138705Z" level=info msg="TearDown network for sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" successfully"
Aug 13 00:55:04.473189 env[1434]: time="2025-08-13T00:55:04.473179105Z" level=info msg="StopPodSandbox for \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" returns successfully"
Aug 13 00:55:04.473758 env[1434]: time="2025-08-13T00:55:04.473726014Z" level=info msg="RemovePodSandbox for \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\""
Aug 13 00:55:04.473860 env[1434]: time="2025-08-13T00:55:04.473774014Z" level=info msg="Forcibly stopping sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\""
Aug 13 00:55:04.473914 env[1434]: time="2025-08-13T00:55:04.473855615Z" level=info msg="TearDown network for sandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" successfully"
Aug 13 00:55:04.480898 env[1434]: time="2025-08-13T00:55:04.480869919Z" level=info msg="RemovePodSandbox \"02e3818e4dafbc588127ca03de9c5227c286587f7760747787189d475be8ade0\" returns successfully"
Aug 13 00:55:04.481383 env[1434]: time="2025-08-13T00:55:04.481228124Z" level=info msg="StopPodSandbox for \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\""
Aug 13 00:55:04.481383 env[1434]: time="2025-08-13T00:55:04.481307526Z" level=info msg="TearDown network for sandbox \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" successfully"
Aug 13 00:55:04.481383 env[1434]: time="2025-08-13T00:55:04.481335326Z" level=info msg="StopPodSandbox for \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" returns successfully"
Aug 13 00:55:04.481657 env[1434]: time="2025-08-13T00:55:04.481591230Z" level=info msg="RemovePodSandbox for \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\""
Aug 13 00:55:04.481737 env[1434]: time="2025-08-13T00:55:04.481674531Z" level=info msg="Forcibly stopping sandbox \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\""
Aug 13 00:55:04.481787 env[1434]: time="2025-08-13T00:55:04.481753232Z" level=info msg="TearDown network for sandbox \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" successfully"
Aug 13 00:55:04.490306 env[1434]: time="2025-08-13T00:55:04.490277258Z" level=info msg="RemovePodSandbox \"c87443154d3d907324a20352916de0f73ec72a375f5fc1fa1bb6f5eba426a5aa\" returns successfully"