Sep 13 00:50:21.063246 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:50:21.063271 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:50:21.063281 kernel: BIOS-provided physical RAM map:
Sep 13 00:50:21.063291 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:50:21.063296 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 13 00:50:21.063301 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 13 00:50:21.063311 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
Sep 13 00:50:21.063327 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd1fff] usable
Sep 13 00:50:21.063333 kernel: BIOS-e820: [mem 0x000000003ffd2000-0x000000003fffafff] ACPI data
Sep 13 00:50:21.063341 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 13 00:50:21.063352 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 13 00:50:21.063357 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 13 00:50:21.063363 kernel: printk: bootconsole [earlyser0] enabled
Sep 13 00:50:21.063371 kernel: NX (Execute Disable) protection: active
Sep 13 00:50:21.063381 kernel: efi: EFI v2.70 by Microsoft
Sep 13 00:50:21.063389 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f340a98 RNG=0x3ffd2018
Sep 13 00:50:21.063397 kernel: random: crng init done
Sep 13 00:50:21.063403 kernel: SMBIOS 3.1.0 present.
Sep 13 00:50:21.063409 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 00:50:21.063417 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 13 00:50:21.063432 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 13 00:50:21.063449 kernel: Hyper-V Host Build:26100-10.0-1-0.1293
Sep 13 00:50:21.063459 kernel: Hyper-V: Nested features: 0x1e0101
Sep 13 00:50:21.063469 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 13 00:50:21.063476 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 13 00:50:21.063482 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 13 00:50:21.063488 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 13 00:50:21.063499 kernel: tsc: Detected 2793.437 MHz processor
Sep 13 00:50:21.063505 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:50:21.063511 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:50:21.063520 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 13 00:50:21.063552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:50:21.063561 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 13 00:50:21.063570 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 13 00:50:21.063577 kernel: Using GB pages for direct mapping
Sep 13 00:50:21.063583 kernel: Secure boot disabled
Sep 13 00:50:21.063590 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:50:21.063601 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 13 00:50:21.063607 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063612 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063622 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 00:50:21.063630 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 13 00:50:21.063642 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063649 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063657 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063666 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063674 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063679 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:50:21.063688 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 13 00:50:21.063703 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Sep 13 00:50:21.063713 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 13 00:50:21.063721 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 13 00:50:21.063727 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 13 00:50:21.063737 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 13 00:50:21.063745 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 13 00:50:21.063756 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 13 00:50:21.063766 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 13 00:50:21.063772 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:50:21.063777 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:50:21.063784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 13 00:50:21.063795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 13 00:50:21.063801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 13 00:50:21.063809 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 13 00:50:21.063819 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 13 00:50:21.063827 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 13 00:50:21.063835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 13 00:50:21.063843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 13 00:50:21.063850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 13 00:50:21.063860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 13 00:50:21.063868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 13 00:50:21.063874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 13 00:50:21.063882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 13 00:50:21.063901 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 13 00:50:21.063916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 13 00:50:21.063924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 13 00:50:21.063933 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 13 00:50:21.063942 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 13 00:50:21.063947 kernel: Zone ranges:
Sep 13 00:50:21.063955 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:50:21.063966 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:50:21.063971 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:50:21.063979 kernel: Movable zone start for each node
Sep 13 00:50:21.063997 kernel: Early memory node ranges
Sep 13 00:50:21.064007 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:50:21.064021 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 13 00:50:21.064026 kernel:   node   0: [mem 0x000000003ffc5000-0x000000003ffd1fff]
Sep 13 00:50:21.064035 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 13 00:50:21.064052 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:50:21.064067 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 13 00:50:21.064075 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:50:21.064082 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:50:21.064088 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Sep 13 00:50:21.064094 kernel: On node 0, zone DMA32: 45 pages in unavailable ranges
Sep 13 00:50:21.064110 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 13 00:50:21.064119 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 13 00:50:21.064132 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:50:21.064138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:50:21.064145 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:50:21.064154 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 13 00:50:21.064162 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:50:21.064170 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 13 00:50:21.064178 kernel: Booting paravirtualized kernel on Hyper-V
Sep 13 00:50:21.064184 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:50:21.064190 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:50:21.064199 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:50:21.064207 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:50:21.064213 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:50:21.064224 kernel: Hyper-V: PV spinlocks enabled
Sep 13 00:50:21.064233 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:50:21.064240 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2062375
Sep 13 00:50:21.064257 kernel: Policy zone: Normal
Sep 13 00:50:21.064269 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:50:21.064283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:50:21.064291 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:50:21.064305 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:50:21.064312 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:50:21.064329 kernel: Memory: 8069176K/8387512K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 318076K reserved, 0K cma-reserved)
Sep 13 00:50:21.064335 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:50:21.064352 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:50:21.064362 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:50:21.064371 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:50:21.064378 kernel: rcu: 	RCU event tracing is enabled.
Sep 13 00:50:21.064386 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:50:21.064401 kernel: 	Rude variant of Tasks RCU enabled.
Sep 13 00:50:21.064421 kernel: 	Tracing variant of Tasks RCU enabled.
Sep 13 00:50:21.064430 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:50:21.064444 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:50:21.064451 kernel: Using NULL legacy PIC
Sep 13 00:50:21.064462 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 13 00:50:21.064473 kernel: Console: colour dummy device 80x25
Sep 13 00:50:21.064479 kernel: printk: console [tty1] enabled
Sep 13 00:50:21.064489 kernel: printk: console [ttyS0] enabled
Sep 13 00:50:21.064496 kernel: printk: bootconsole [earlyser0] disabled
Sep 13 00:50:21.064507 kernel: ACPI: Core revision 20210730
Sep 13 00:50:21.064516 kernel: Failed to register legacy timer interrupt
Sep 13 00:50:21.064522 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:50:21.064537 kernel: Hyper-V: Using IPI hypercalls
Sep 13 00:50:21.064547 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5586.87 BogoMIPS (lpj=2793437)
Sep 13 00:50:21.064554 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:50:21.064560 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:50:21.064569 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:50:21.064577 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:50:21.064584 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:50:21.064593 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:50:21.064607 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:50:21.064617 kernel: RETBleed: Vulnerable
Sep 13 00:50:21.064629 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:50:21.064635 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:50:21.064642 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:50:21.064653 kernel: active return thunk: its_return_thunk
Sep 13 00:50:21.064659 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:50:21.064668 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:50:21.064678 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:50:21.064685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:50:21.064692 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:50:21.064701 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:50:21.064709 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:50:21.064715 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Sep 13 00:50:21.064721 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Sep 13 00:50:21.064731 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Sep 13 00:50:21.064739 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 13 00:50:21.064745 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 13 00:50:21.064754 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:50:21.064762 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:50:21.064770 kernel: LSM: Security Framework initializing
Sep 13 00:50:21.064784 kernel: SELinux:  Initializing.
Sep 13 00:50:21.064791 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:50:21.064797 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:50:21.064807 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Sep 13 00:50:21.064814 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Sep 13 00:50:21.064822 kernel: signal: max sigframe size: 3632
Sep 13 00:50:21.064832 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:50:21.064838 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:50:21.064844 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:50:21.064862 kernel: x86: Booting SMP configuration:
Sep 13 00:50:21.064870 kernel: .... node  #0, CPUs:      #1
Sep 13 00:50:21.064877 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 13 00:50:21.064888 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:50:21.064894 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:50:21.064900 kernel: smpboot: Max logical packages: 1
Sep 13 00:50:21.064906 kernel: smpboot: Total of 2 processors activated (11173.74 BogoMIPS)
Sep 13 00:50:21.064915 kernel: devtmpfs: initialized
Sep 13 00:50:21.064924 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:50:21.064933 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 13 00:50:21.064943 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:50:21.064950 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:50:21.064956 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:50:21.064962 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:50:21.064970 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:50:21.064985 kernel: audit: type=2000 audit(1757724619.024:1): state=initialized audit_enabled=0 res=1
Sep 13 00:50:21.064996 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:50:21.065012 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:50:21.065022 kernel: cpuidle: using governor menu
Sep 13 00:50:21.065030 kernel: ACPI: bus type PCI registered
Sep 13 00:50:21.065036 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:50:21.065042 kernel: dca service started, version 1.12.1
Sep 13 00:50:21.065055 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:50:21.065061 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:50:21.065067 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:50:21.065085 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:50:21.065100 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:50:21.065106 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:50:21.065112 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:50:21.065128 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:50:21.065136 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:50:21.065148 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:50:21.065156 kernel: ACPI: Interpreter enabled
Sep 13 00:50:21.065162 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:50:21.065173 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:50:21.065180 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:50:21.065196 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 13 00:50:21.065204 kernel: iommu: Default domain type: Translated
Sep 13 00:50:21.065210 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:50:21.065219 kernel: vgaarb: loaded
Sep 13 00:50:21.065227 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:50:21.065233 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:50:21.065242 kernel: PTP clock support registered
Sep 13 00:50:21.065251 kernel: Registered efivars operations
Sep 13 00:50:21.065256 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:50:21.065264 kernel: PCI: System does not support PCI
Sep 13 00:50:21.065270 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 13 00:50:21.065279 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:50:21.065287 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:50:21.065293 kernel: pnp: PnP ACPI init
Sep 13 00:50:21.065299 kernel: pnp: PnP ACPI: found 3 devices
Sep 13 00:50:21.065305 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:50:21.065311 kernel: NET: Registered PF_INET protocol family
Sep 13 00:50:21.065319 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:50:21.065330 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:50:21.065336 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:50:21.065342 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:50:21.065351 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Sep 13 00:50:21.065362 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:50:21.065368 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:50:21.065378 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:50:21.065386 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:50:21.065392 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:50:21.065402 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:50:21.065416 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:50:21.065425 kernel: software IO TLB: mapped [mem 0x000000003aa89000-0x000000003ea89000] (64MB)
Sep 13 00:50:21.065435 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:50:21.065441 kernel: Initialise system trusted keyrings
Sep 13 00:50:21.065449 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:50:21.065459 kernel: Key type asymmetric registered
Sep 13 00:50:21.065465 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:50:21.065474 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:50:21.065486 kernel: io scheduler mq-deadline registered
Sep 13 00:50:21.065492 kernel: io scheduler kyber registered
Sep 13 00:50:21.065499 kernel: io scheduler bfq registered
Sep 13 00:50:21.065505 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:50:21.065512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:50:21.065522 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:50:21.065534 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:50:21.065544 kernel: i8042: PNP: No PS/2 controller found.
Sep 13 00:50:21.065697 kernel: rtc_cmos 00:02: registered as rtc0
Sep 13 00:50:21.065816 kernel: rtc_cmos 00:02: setting system clock to 2025-09-13T00:50:20 UTC (1757724620)
Sep 13 00:50:21.065922 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 13 00:50:21.065936 kernel: intel_pstate: CPU model not supported
Sep 13 00:50:21.065948 kernel: efifb: probing for efifb
Sep 13 00:50:21.065960 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 00:50:21.065972 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 00:50:21.065984 kernel: efifb: scrolling: redraw
Sep 13 00:50:21.065999 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:50:21.066011 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:50:21.066024 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:50:21.066037 kernel: pstore: Registered efi as persistent store backend
Sep 13 00:50:21.066049 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:50:21.066061 kernel: Segment Routing with IPv6
Sep 13 00:50:21.066073 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:50:21.066085 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:50:21.066097 kernel: Key type dns_resolver registered
Sep 13 00:50:21.066111 kernel: IPI shorthand broadcast: enabled
Sep 13 00:50:21.066123 kernel: sched_clock: Marking stable (882384400, 25535100)->(1127469900, -219550400)
Sep 13 00:50:21.066135 kernel: registered taskstats version 1
Sep 13 00:50:21.066146 kernel: Loading compiled-in X.509 certificates
Sep 13 00:50:21.066159 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:50:21.066171 kernel: Key type .fscrypt registered
Sep 13 00:50:21.066182 kernel: Key type fscrypt-provisioning registered
Sep 13 00:50:21.066195 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:50:21.066207 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:50:21.066221 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:50:21.066233 kernel: ima: No architecture policies found
Sep 13 00:50:21.066245 kernel: clk: Disabling unused clocks
Sep 13 00:50:21.066257 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:50:21.066269 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:50:21.066281 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:50:21.066293 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:50:21.066305 kernel: Run /init as init process
Sep 13 00:50:21.066317 kernel:   with arguments:
Sep 13 00:50:21.066331 kernel:     /init
Sep 13 00:50:21.066342 kernel:   with environment:
Sep 13 00:50:21.066354 kernel:     HOME=/
Sep 13 00:50:21.066366 kernel:     TERM=linux
Sep 13 00:50:21.066377 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:50:21.066392 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:50:21.066407 systemd[1]: Detected virtualization microsoft.
Sep 13 00:50:21.066420 systemd[1]: Detected architecture x86-64.
Sep 13 00:50:21.066434 systemd[1]: Running in initrd.
Sep 13 00:50:21.066446 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:50:21.066458 systemd[1]: Hostname set to .
Sep 13 00:50:21.066472 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:50:21.066484 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:50:21.066496 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:50:21.066509 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:50:21.066521 systemd[1]: Reached target paths.target.
Sep 13 00:50:21.066543 systemd[1]: Reached target slices.target.
Sep 13 00:50:21.066556 systemd[1]: Reached target swap.target.
Sep 13 00:50:21.066568 systemd[1]: Reached target timers.target.
Sep 13 00:50:21.066581 systemd[1]: Listening on iscsid.socket.
Sep 13 00:50:21.066594 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:50:21.066607 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:50:21.066620 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:50:21.066632 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:50:21.066647 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:50:21.066660 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:50:21.066672 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:50:21.066685 systemd[1]: Reached target sockets.target.
Sep 13 00:50:21.066698 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:50:21.066710 systemd[1]: Finished network-cleanup.service.
Sep 13 00:50:21.066723 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:50:21.066736 systemd[1]: Starting systemd-journald.service...
Sep 13 00:50:21.066749 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:50:21.066763 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:50:21.066776 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:50:21.066788 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:50:21.066802 kernel: audit: type=1130 audit(1757724621.048:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.066814 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:50:21.066831 systemd-journald[183]: Journal started
Sep 13 00:50:21.067597 systemd-journald[183]: Runtime Journal (/run/log/journal/ddada323f61543d794e7087762684ef6) is 8.0M, max 159.0M, 151.0M free.
Sep 13 00:50:21.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.071536 systemd[1]: Started systemd-journald.service.
Sep 13 00:50:21.087998 kernel: audit: type=1130 audit(1757724621.069:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.088143 systemd-modules-load[184]: Inserted module 'overlay'
Sep 13 00:50:21.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.112593 kernel: audit: type=1130 audit(1757724621.097:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.098585 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:50:21.119248 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:50:21.125030 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:50:21.139022 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:50:21.140118 systemd-resolved[185]: Positive Trust Anchors:
Sep 13 00:50:21.140128 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:50:21.140158 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:50:21.177661 kernel: audit: type=1130 audit(1757724621.117:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.143691 systemd-resolved[185]: Defaulting to hostname 'linux'.
Sep 13 00:50:21.181364 systemd[1]: Started systemd-resolved.service.
Sep 13 00:50:21.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.181746 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:50:21.212227 kernel: audit: type=1130 audit(1757724621.179:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.212252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:50:21.229428 kernel: audit: type=1130 audit(1757724621.179:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.215173 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:50:21.232954 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:50:21.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.255899 kernel: audit: type=1130 audit(1757724621.232:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:21.258201 dracut-cmdline[200]: dracut-dracut-053
Sep 13 00:50:21.262902 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:50:21.282312 kernel: Bridge firewalling registered
Sep 13 00:50:21.265409 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 13 00:50:21.316549 kernel: SCSI subsystem initialized
Sep 13 00:50:21.344670 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:50:21.344768 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:50:21.355813 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:50:21.355850 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:50:21.359987 systemd-modules-load[184]: Inserted module 'dm_multipath' Sep 13 00:50:21.361772 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:50:21.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:21.365216 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:50:21.385938 kernel: audit: type=1130 audit(1757724621.362:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:21.388870 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:50:21.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:21.406547 kernel: audit: type=1130 audit(1757724621.391:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:21.417553 kernel: iscsi: registered transport (tcp) Sep 13 00:50:21.446163 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:50:21.446228 kernel: QLogic iSCSI HBA Driver Sep 13 00:50:21.473955 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:50:21.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:21.479103 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:50:21.530553 kernel: raid6: avx512x4 gen() 42499 MB/s Sep 13 00:50:21.550541 kernel: raid6: avx512x4 xor() 9671 MB/s Sep 13 00:50:21.570541 kernel: raid6: avx512x2 gen() 41488 MB/s Sep 13 00:50:21.591539 kernel: raid6: avx512x2 xor() 26332 MB/s Sep 13 00:50:21.611540 kernel: raid6: avx512x1 gen() 41554 MB/s Sep 13 00:50:21.631550 kernel: raid6: avx512x1 xor() 24173 MB/s Sep 13 00:50:21.652543 kernel: raid6: avx2x4 gen() 34485 MB/s Sep 13 00:50:21.672549 kernel: raid6: avx2x4 xor() 9167 MB/s Sep 13 00:50:21.692538 kernel: raid6: avx2x2 gen() 33892 MB/s Sep 13 00:50:21.713542 kernel: raid6: avx2x2 xor() 21383 MB/s Sep 13 00:50:21.733540 kernel: raid6: avx2x1 gen() 26428 MB/s Sep 13 00:50:21.753540 kernel: raid6: avx2x1 xor() 17111 MB/s Sep 13 00:50:21.774541 kernel: raid6: sse2x4 gen() 10259 MB/s Sep 13 00:50:21.794541 kernel: raid6: sse2x4 xor() 5961 MB/s Sep 13 00:50:21.815541 kernel: raid6: sse2x2 gen() 10259 MB/s Sep 13 00:50:21.835541 kernel: raid6: sse2x2 xor() 6604 MB/s Sep 13 00:50:21.855539 kernel: raid6: sse2x1 gen() 9272 MB/s Sep 13 00:50:21.879646 kernel: raid6: sse2x1 xor() 5312 MB/s Sep 13 00:50:21.879660 kernel: raid6: using algorithm avx512x4 gen() 42499 MB/s Sep 13 00:50:21.879673 kernel: raid6: .... xor() 9671 MB/s, rmw enabled Sep 13 00:50:21.883189 kernel: raid6: using avx512x2 recovery algorithm Sep 13 00:50:21.904550 kernel: xor: automatically using best checksumming function avx Sep 13 00:50:22.006554 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:50:22.014723 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:50:22.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:22.019000 audit: BPF prog-id=7 op=LOAD Sep 13 00:50:22.019000 audit: BPF prog-id=8 op=LOAD Sep 13 00:50:22.020287 systemd[1]: Starting systemd-udevd.service... Sep 13 00:50:22.034509 systemd-udevd[384]: Using default interface naming scheme 'v252'. Sep 13 00:50:22.038712 systemd[1]: Started systemd-udevd.service. Sep 13 00:50:22.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:22.048729 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:50:22.063681 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 13 00:50:22.092100 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:50:22.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:22.095229 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:50:22.133428 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:50:22.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:22.177548 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:50:22.206547 kernel: hv_vmbus: Vmbus version:5.2 Sep 13 00:50:22.218966 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:50:22.219015 kernel: AES CTR mode by8 optimization enabled Sep 13 00:50:22.219028 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 13 00:50:22.235542 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:50:22.250166 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 13 00:50:22.250221 kernel: hv_vmbus: registering driver hid_hyperv Sep 13 00:50:22.260105 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 13 00:50:22.269556 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 13 00:50:22.279624 kernel: hv_vmbus: registering driver hv_netvsc Sep 13 00:50:22.284541 kernel: hv_vmbus: registering driver hv_storvsc Sep 13 00:50:22.297097 kernel: scsi host0: storvsc_host_t Sep 13 00:50:22.297305 kernel: scsi host1: storvsc_host_t Sep 13 00:50:22.297330 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 13 00:50:22.308907 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 13 00:50:22.339367 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 13 00:50:22.348254 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:50:22.348283 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 13 00:50:22.366078 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 13 00:50:22.366260 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 13 00:50:22.366414 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:50:22.366588 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 13 00:50:22.366748 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 13 00:50:22.366901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:50:22.366923 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:50:22.385544 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#62 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 13 00:50:22.393542 kernel: hv_netvsc 7c1e522e-73fa-7c1e-522e-73fa7c1e522e eth0: VF slot 1 added Sep 13 00:50:22.410878 kernel: hv_vmbus: registering driver hv_pci Sep 13 00:50:22.410940 kernel: hv_pci 09d2a93b-c3e7-463c-ad50-2d6155a6774c: PCI VMBus probing: Using version 0x10004 Sep 13 00:50:22.487770 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 13 00:50:22.487962 kernel: hv_pci 09d2a93b-c3e7-463c-ad50-2d6155a6774c: PCI host bridge to bus c3e7:00 Sep 13 00:50:22.488106 kernel: pci_bus c3e7:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 13 00:50:22.488283 kernel: pci_bus c3e7:00: No busn resource found for root bus, will use [bus 00-ff] Sep 13 00:50:22.488431 kernel: pci c3e7:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 13 00:50:22.488622 kernel: pci c3e7:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 13 00:50:22.488782 kernel: pci c3e7:00:02.0: enabling Extended Tags Sep 13 00:50:22.488939 kernel: pci c3e7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c3e7:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 13 00:50:22.489091 kernel: pci_bus c3e7:00: busn_res: [bus 00-ff] end is updated to 00 Sep 13 00:50:22.489229 kernel: pci c3e7:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 13 00:50:22.579413 kernel: mlx5_core c3e7:00:02.0: enabling device (0000 -> 0002) Sep 13 00:50:22.888013 kernel: mlx5_core c3e7:00:02.0: firmware version: 16.30.5000 Sep 13 00:50:22.888207 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (441) Sep 13 00:50:22.888225 kernel: mlx5_core c3e7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Sep 13 00:50:22.888374 kernel: mlx5_core c3e7:00:02.0: Supported tc offload range - chains: 1, prios: 1 Sep 13 00:50:22.888489 kernel: 
mlx5_core c3e7:00:02.0: mlx5e_tc_post_act_init:40:(pid 188): firmware level support is missing Sep 13 00:50:22.888609 kernel: hv_netvsc 7c1e522e-73fa-7c1e-522e-73fa7c1e522e eth0: VF registering: eth1 Sep 13 00:50:22.888703 kernel: mlx5_core c3e7:00:02.0 eth1: joined to eth0 Sep 13 00:50:22.731047 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:50:22.753172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:50:22.982556 kernel: mlx5_core c3e7:00:02.0 enP50151s1: renamed from eth1 Sep 13 00:50:23.019520 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:50:23.038943 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:50:23.045069 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:50:23.052847 systemd[1]: Starting disk-uuid.service... Sep 13 00:50:23.068545 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:50:23.078549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:50:23.087545 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:50:24.089008 disk-uuid[572]: The operation has completed successfully. Sep 13 00:50:24.091633 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:50:24.160823 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:50:24.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.160928 systemd[1]: Finished disk-uuid.service. Sep 13 00:50:24.175967 systemd[1]: Starting verity-setup.service... 
Sep 13 00:50:24.219553 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:50:24.597915 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:50:24.604748 systemd[1]: Finished verity-setup.service. Sep 13 00:50:24.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.609569 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:50:24.681572 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:50:24.681055 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:50:24.685140 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:50:24.689685 systemd[1]: Starting ignition-setup.service... Sep 13 00:50:24.694607 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:50:24.716612 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:50:24.716649 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:50:24.716662 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:50:24.761302 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:50:24.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.766000 audit: BPF prog-id=9 op=LOAD Sep 13 00:50:24.767271 systemd[1]: Starting systemd-networkd.service... 
Sep 13 00:50:24.789780 systemd-networkd[836]: lo: Link UP Sep 13 00:50:24.789790 systemd-networkd[836]: lo: Gained carrier Sep 13 00:50:24.793730 systemd-networkd[836]: Enumeration completed Sep 13 00:50:24.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.793823 systemd[1]: Started systemd-networkd.service. Sep 13 00:50:24.798135 systemd[1]: Reached target network.target. Sep 13 00:50:24.800125 systemd-networkd[836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:50:24.802345 systemd[1]: Starting iscsiuio.service... Sep 13 00:50:24.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.812086 systemd[1]: Started iscsiuio.service. Sep 13 00:50:24.816335 systemd[1]: Starting iscsid.service... Sep 13 00:50:24.821029 systemd[1]: Started iscsid.service. Sep 13 00:50:24.824788 iscsid[841]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:50:24.824788 iscsid[841]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 00:50:24.824788 iscsid[841]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:50:24.824788 iscsid[841]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:50:24.824788 iscsid[841]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 13 00:50:24.824788 iscsid[841]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:50:24.824788 iscsid[841]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:50:24.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.857282 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:50:24.871142 kernel: mlx5_core c3e7:00:02.0 enP50151s1: Link up Sep 13 00:50:24.871367 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 00:50:24.876897 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:50:24.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:24.881387 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:50:24.883871 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:50:24.888846 systemd[1]: Reached target remote-fs.target. Sep 13 00:50:24.896270 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:50:24.908283 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:50:24.918896 kernel: hv_netvsc 7c1e522e-73fa-7c1e-522e-73fa7c1e522e eth0: Data path switched to VF: enP50151s1 Sep 13 00:50:24.919102 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:50:24.919437 systemd-networkd[836]: enP50151s1: Link UP Sep 13 00:50:24.919642 systemd-networkd[836]: eth0: Link UP Sep 13 00:50:24.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:24.920044 systemd-networkd[836]: eth0: Gained carrier Sep 13 00:50:24.926865 systemd-networkd[836]: enP50151s1: Gained carrier Sep 13 00:50:24.941583 systemd-networkd[836]: eth0: DHCPv4 address 10.200.4.42/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 13 00:50:24.954703 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:50:25.053140 systemd[1]: Finished ignition-setup.service. Sep 13 00:50:25.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:25.058733 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:50:26.199651 systemd-networkd[836]: eth0: Gained IPv6LL Sep 13 00:50:28.339976 ignition[863]: Ignition 2.14.0 Sep 13 00:50:28.339988 ignition[863]: Stage: fetch-offline Sep 13 00:50:28.340309 ignition[863]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:50:28.340358 ignition[863]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:50:28.380461 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:50:28.383616 ignition[863]: parsed url from cmdline: "" Sep 13 00:50:28.383626 ignition[863]: no config URL provided Sep 13 00:50:28.383637 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:50:28.383655 ignition[863]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:50:28.383663 ignition[863]: failed to fetch config: resource requires networking Sep 13 00:50:28.438296 ignition[863]: Ignition finished successfully Sep 13 00:50:28.447104 systemd[1]: Finished ignition-fetch-offline.service. 
Sep 13 00:50:28.479946 kernel: kauditd_printk_skb: 18 callbacks suppressed Sep 13 00:50:28.479979 kernel: audit: type=1130 audit(1757724628.450:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.452041 systemd[1]: Starting ignition-fetch.service... Sep 13 00:50:28.460132 ignition[869]: Ignition 2.14.0 Sep 13 00:50:28.460137 ignition[869]: Stage: fetch Sep 13 00:50:28.460231 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:50:28.460250 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:50:28.464426 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:50:28.481245 ignition[869]: parsed url from cmdline: "" Sep 13 00:50:28.481253 ignition[869]: no config URL provided Sep 13 00:50:28.481261 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:50:28.481286 ignition[869]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:50:28.481335 ignition[869]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 13 00:50:28.546433 ignition[869]: GET result: OK Sep 13 00:50:28.546513 ignition[869]: config has been read from IMDS userdata Sep 13 00:50:28.546564 ignition[869]: parsing config with SHA512: 89f5f1ac0fed353a3fd0f0d610b01599e1a98c28e452ee9a3e6ee3a928b55ca44bcb284e95dadfbf3b7bece6384816c4c954f85544484327ec663d03c4a951d7 Sep 13 00:50:28.568129 unknown[869]: fetched base config from "system" Sep 13 00:50:28.568613 ignition[869]: 
fetch: fetch complete Sep 13 00:50:28.568143 unknown[869]: fetched base config from "system" Sep 13 00:50:28.592755 kernel: audit: type=1130 audit(1757724628.576:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.568618 ignition[869]: fetch: fetch passed Sep 13 00:50:28.568152 unknown[869]: fetched user config from "azure" Sep 13 00:50:28.568653 ignition[869]: Ignition finished successfully Sep 13 00:50:28.572195 systemd[1]: Finished ignition-fetch.service. Sep 13 00:50:28.577907 systemd[1]: Starting ignition-kargs.service... Sep 13 00:50:28.612355 ignition[875]: Ignition 2.14.0 Sep 13 00:50:28.612366 ignition[875]: Stage: kargs Sep 13 00:50:28.612500 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:50:28.612538 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:50:28.633100 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:50:28.635004 ignition[875]: kargs: kargs passed Sep 13 00:50:28.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.636364 systemd[1]: Finished ignition-kargs.service. Sep 13 00:50:28.659099 kernel: audit: type=1130 audit(1757724628.639:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:28.635065 ignition[875]: Ignition finished successfully Sep 13 00:50:28.641566 systemd[1]: Starting ignition-disks.service... Sep 13 00:50:28.650512 ignition[881]: Ignition 2.14.0 Sep 13 00:50:28.650522 ignition[881]: Stage: disks Sep 13 00:50:28.666969 systemd[1]: Finished ignition-disks.service. Sep 13 00:50:28.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.650667 ignition[881]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:50:28.690469 kernel: audit: type=1130 audit(1757724628.670:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.682493 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:50:28.650699 ignition[881]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:50:28.690514 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:50:28.656233 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:50:28.658163 ignition[881]: disks: disks passed Sep 13 00:50:28.658211 ignition[881]: Ignition finished successfully Sep 13 00:50:28.702967 systemd[1]: Reached target local-fs.target. Sep 13 00:50:28.705068 systemd[1]: Reached target sysinit.target. Sep 13 00:50:28.709308 systemd[1]: Reached target basic.target. Sep 13 00:50:28.712277 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:50:28.790715 systemd-fsck[889]: ROOT: clean, 629/7326000 files, 481084/7359488 blocks Sep 13 00:50:28.800503 systemd[1]: Finished systemd-fsck-root.service. 
Sep 13 00:50:28.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.803633 systemd[1]: Mounting sysroot.mount... Sep 13 00:50:28.822652 kernel: audit: type=1130 audit(1757724628.802:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:28.837397 systemd[1]: Mounted sysroot.mount. Sep 13 00:50:28.840694 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:50:28.840765 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:50:28.877427 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:50:28.882941 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 13 00:50:28.887626 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:50:28.888760 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:50:28.898228 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:50:28.949041 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:50:28.954345 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:50:28.967546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (900) Sep 13 00:50:28.977133 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:50:28.977173 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:50:28.977187 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:50:28.981109 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:50:28.990878 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 13 00:50:29.018335 initrd-setup-root[931]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:50:29.037804 initrd-setup-root[939]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:50:29.055760 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:50:29.669000 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:50:29.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:29.672221 systemd[1]: Starting ignition-mount.service... Sep 13 00:50:29.693982 kernel: audit: type=1130 audit(1757724629.670:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:29.689210 systemd[1]: Starting sysroot-boot.service... Sep 13 00:50:29.697619 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 00:50:29.700901 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 13 00:50:29.719792 systemd[1]: Finished sysroot-boot.service. Sep 13 00:50:29.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:29.737545 kernel: audit: type=1130 audit(1757724629.723:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:29.839749 ignition[969]: INFO : Ignition 2.14.0 Sep 13 00:50:29.839749 ignition[969]: INFO : Stage: mount Sep 13 00:50:29.845020 ignition[969]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:50:29.845020 ignition[969]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:50:29.845020 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:50:29.876952 kernel: audit: type=1130 audit(1757724629.851:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:29.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:29.877023 ignition[969]: INFO : mount: mount passed Sep 13 00:50:29.877023 ignition[969]: INFO : Ignition finished successfully Sep 13 00:50:29.849507 systemd[1]: Finished ignition-mount.service. 
Sep 13 00:50:30.545081 coreos-metadata[899]: Sep 13 00:50:30.544 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 00:50:30.566015 coreos-metadata[899]: Sep 13 00:50:30.565 INFO Fetch successful Sep 13 00:50:30.598615 coreos-metadata[899]: Sep 13 00:50:30.598 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 13 00:50:30.613187 coreos-metadata[899]: Sep 13 00:50:30.613 INFO Fetch successful Sep 13 00:50:30.634203 coreos-metadata[899]: Sep 13 00:50:30.634 INFO wrote hostname ci-3510.3.8-n-2e01e92296 to /sysroot/etc/hostname Sep 13 00:50:30.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:30.636076 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 13 00:50:30.659390 kernel: audit: type=1130 audit(1757724630.640:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:30.642475 systemd[1]: Starting ignition-files.service... Sep 13 00:50:30.662705 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:50:30.683368 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (978) Sep 13 00:50:30.683411 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:50:30.683427 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:50:30.687142 kernel: BTRFS info (device sda6): has skinny extents Sep 13 00:50:30.698088 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 13 00:50:30.710147 ignition[997]: INFO : Ignition 2.14.0 Sep 13 00:50:30.710147 ignition[997]: INFO : Stage: files Sep 13 00:50:30.714377 ignition[997]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:50:30.714377 ignition[997]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:50:30.727826 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:50:30.748855 ignition[997]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:50:30.768750 ignition[997]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:50:30.768750 ignition[997]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:50:30.875540 ignition[997]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:50:30.879509 ignition[997]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:50:30.890908 unknown[997]: wrote ssh authorized keys file for user: core Sep 13 00:50:30.893866 ignition[997]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:50:30.906298 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:50:30.911874 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 13 00:50:30.995954 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:50:31.279629 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:50:31.294620 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:50:31.299268 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 00:50:31.517467 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:50:31.568640 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:50:31.568640 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:50:31.579211 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:50:31.579211 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:50:31.579211 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:50:31.579211 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:50:31.579211 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:50:31.579211 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:50:31.579211 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1145606968" Sep 13 00:50:31.613773 ignition[997]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1145606968": device or resource busy Sep 13 00:50:31.613773 ignition[997]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1145606968", trying btrfs: device or resource busy Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1145606968" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1145606968" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1145606968" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): 
op(e): [finished] unmounting "/mnt/oem1145606968" Sep 13 00:50:31.613773 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 13 00:50:31.613059 systemd[1]: mnt-oem1145606968.mount: Deactivated successfully. Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3187938606" Sep 13 00:50:31.690992 ignition[997]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3187938606": device or resource busy Sep 13 00:50:31.690992 ignition[997]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3187938606", trying btrfs: device or resource busy Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3187938606" Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3187938606" Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3187938606" Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3187938606" Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file 
"/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:50:31.690992 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 13 00:50:32.053682 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Sep 13 00:50:32.220394 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:50:32.220394 ignition[997]: INFO : files: op(14): [started] processing unit "waagent.service" Sep 13 00:50:32.220394 ignition[997]: INFO : files: op(14): [finished] processing unit "waagent.service" Sep 13 00:50:32.220394 ignition[997]: INFO : files: op(15): [started] processing unit "nvidia.service" Sep 13 00:50:32.266750 kernel: audit: type=1130 audit(1757724632.228:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(15): [finished] processing unit "nvidia.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" Sep 13 00:50:32.266861 ignition[997]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:50:32.266861 ignition[997]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:50:32.266861 ignition[997]: INFO : files: files passed Sep 13 00:50:32.266861 ignition[997]: INFO : Ignition finished successfully Sep 13 00:50:32.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.224107 systemd[1]: Finished ignition-files.service. Sep 13 00:50:32.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.230415 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:50:32.331685 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:50:32.256889 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:50:32.257803 systemd[1]: Starting ignition-quench.service... Sep 13 00:50:32.273315 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:50:32.320609 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:50:32.322881 systemd[1]: Finished ignition-quench.service. Sep 13 00:50:32.326973 systemd[1]: Reached target ignition-complete.target. Sep 13 00:50:32.350379 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:50:32.363604 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:50:32.363705 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:50:32.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:50:32.368776 systemd[1]: Reached target initrd-fs.target. Sep 13 00:50:32.372328 systemd[1]: Reached target initrd.target. Sep 13 00:50:32.374087 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:50:32.374877 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:50:32.388063 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:50:32.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.392329 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:50:32.402903 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:50:32.407079 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:50:32.411800 systemd[1]: Stopped target timers.target. Sep 13 00:50:32.415450 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:50:32.417852 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:50:32.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.421964 systemd[1]: Stopped target initrd.target. Sep 13 00:50:32.425724 systemd[1]: Stopped target basic.target. Sep 13 00:50:32.429191 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:50:32.433601 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:50:32.437763 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:50:32.442207 systemd[1]: Stopped target remote-fs.target. Sep 13 00:50:32.445971 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:50:32.449968 systemd[1]: Stopped target sysinit.target. Sep 13 00:50:32.453481 systemd[1]: Stopped target local-fs.target. Sep 13 00:50:32.457345 systemd[1]: Stopped target local-fs-pre.target. 
Sep 13 00:50:32.461131 systemd[1]: Stopped target swap.target. Sep 13 00:50:32.464486 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:50:32.466776 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:50:32.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.471660 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:50:32.475634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:50:32.477923 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:50:32.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.482143 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:50:32.484772 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:50:32.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.490060 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:50:32.492338 systemd[1]: Stopped ignition-files.service. Sep 13 00:50:32.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.496214 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:50:32.498899 systemd[1]: Stopped flatcar-metadata-hostname.service. 
Sep 13 00:50:32.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.504224 systemd[1]: Stopping ignition-mount.service... Sep 13 00:50:32.517296 iscsid[841]: iscsid shutting down. Sep 13 00:50:32.506223 systemd[1]: Stopping iscsid.service... Sep 13 00:50:32.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.527628 ignition[1036]: INFO : Ignition 2.14.0 Sep 13 00:50:32.527628 ignition[1036]: INFO : Stage: umount Sep 13 00:50:32.527628 ignition[1036]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:50:32.527628 ignition[1036]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 00:50:32.527628 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:50:32.507784 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 13 00:50:32.546994 ignition[1036]: INFO : umount: umount passed Sep 13 00:50:32.546994 ignition[1036]: INFO : Ignition finished successfully Sep 13 00:50:32.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.507946 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:50:32.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.512924 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:50:32.519470 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:50:32.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.521145 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:50:32.523425 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:50:32.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:50:32.523575 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:50:32.535779 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:50:32.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.535900 systemd[1]: Stopped iscsid.service. Sep 13 00:50:32.551588 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:50:32.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.551725 systemd[1]: Stopped ignition-mount.service. Sep 13 00:50:32.556019 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:50:32.556112 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:50:32.560004 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:50:32.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.560053 systemd[1]: Stopped ignition-disks.service. Sep 13 00:50:32.563550 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:50:32.563595 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:50:32.569016 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:50:32.569072 systemd[1]: Stopped ignition-fetch.service. Sep 13 00:50:32.577087 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:50:32.577152 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:50:32.581913 systemd[1]: Stopped target paths.target. 
Sep 13 00:50:32.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.584012 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:50:32.585587 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:50:32.588373 systemd[1]: Stopped target slices.target. Sep 13 00:50:32.590265 systemd[1]: Stopped target sockets.target. Sep 13 00:50:32.592108 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:50:32.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.652000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:50:32.592172 systemd[1]: Closed iscsid.socket. Sep 13 00:50:32.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.595114 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:50:32.595170 systemd[1]: Stopped ignition-setup.service. Sep 13 00:50:32.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.602208 systemd[1]: Stopping iscsiuio.service... Sep 13 00:50:32.606988 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:50:32.607458 systemd[1]: iscsiuio.service: Deactivated successfully. 
Sep 13 00:50:32.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.607590 systemd[1]: Stopped iscsiuio.service. Sep 13 00:50:32.611696 systemd[1]: Stopped target network.target. Sep 13 00:50:32.615500 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:50:32.615549 systemd[1]: Closed iscsiuio.socket. Sep 13 00:50:32.619893 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:50:32.623526 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:50:32.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.627841 systemd-networkd[836]: eth0: DHCPv6 lease lost Sep 13 00:50:32.699000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:50:32.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.629701 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:50:32.629783 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:50:32.641400 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:50:32.645403 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:50:32.651786 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:50:32.651853 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:50:32.655436 systemd[1]: Stopping network-cleanup.service... Sep 13 00:50:32.656498 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:50:32.656602 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:50:32.657025 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 13 00:50:32.657083 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:50:32.663258 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:50:32.663306 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:50:32.668723 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:50:32.670651 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:50:32.676799 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:50:32.676943 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:50:32.683233 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:50:32.683290 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:50:32.686736 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:50:32.686776 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:50:32.692195 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:50:32.692246 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:50:32.699802 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:50:32.699859 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:50:32.703790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:50:32.703842 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:50:32.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.766581 kernel: hv_netvsc 7c1e522e-73fa-7c1e-522e-73fa7c1e522e eth0: Data path switched from VF: enP50151s1 Sep 13 00:50:32.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.767097 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Sep 13 00:50:32.769996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:50:32.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:32.770074 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:50:32.775393 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:50:32.775488 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:50:32.801329 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:50:32.803567 systemd[1]: Stopped network-cleanup.service. Sep 13 00:50:32.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:33.262410 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:50:33.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:33.262548 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:50:33.266924 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:50:33.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:50:33.271082 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:50:33.271149 systemd[1]: Stopped initrd-setup-root.service. 
Sep 13 00:50:33.276763 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:50:33.290730 systemd[1]: Switching root. Sep 13 00:50:33.316262 systemd-journald[183]: Journal stopped Sep 13 00:50:50.662276 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Sep 13 00:50:50.662302 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:50:50.662317 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:50:50.662325 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:50:50.662336 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:50:50.662345 kernel: SELinux: policy capability open_perms=1 Sep 13 00:50:50.662359 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:50:50.662367 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:50:50.662379 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:50:50.662388 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:50:50.662402 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:50:50.662410 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:50:50.662420 kernel: kauditd_printk_skb: 42 callbacks suppressed Sep 13 00:50:50.662431 kernel: audit: type=1403 audit(1757724635.944:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:50:50.662448 systemd[1]: Successfully loaded SELinux policy in 349.522ms. Sep 13 00:50:50.662460 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.011ms. Sep 13 00:50:50.662474 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:50:50.662485 systemd[1]: Detected virtualization microsoft. 
Sep 13 00:50:50.662496 systemd[1]: Detected architecture x86-64.
Sep 13 00:50:50.662507 systemd[1]: Detected first boot.
Sep 13 00:50:50.662519 systemd[1]: Hostname set to .
Sep 13 00:50:50.662541 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:50:50.662552 kernel: audit: type=1400 audit(1757724636.819:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:50:50.662567 kernel: audit: type=1400 audit(1757724636.819:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:50:50.662575 kernel: audit: type=1334 audit(1757724636.834:84): prog-id=10 op=LOAD
Sep 13 00:50:50.662589 kernel: audit: type=1334 audit(1757724636.834:85): prog-id=10 op=UNLOAD
Sep 13 00:50:50.662599 kernel: audit: type=1334 audit(1757724636.848:86): prog-id=11 op=LOAD
Sep 13 00:50:50.662611 kernel: audit: type=1334 audit(1757724636.848:87): prog-id=11 op=UNLOAD
Sep 13 00:50:50.662624 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:50:50.662639 kernel: audit: type=1400 audit(1757724638.390:88): avc: denied { associate } for pid=1070 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:50:50.662655 kernel: audit: type=1300 audit(1757724638.390:88): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d544 a1=c0000ce738 a2=c0000d6d40 a3=32 items=0 ppid=1053 pid=1070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:50:50.662670 kernel: audit: type=1327 audit(1757724638.390:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:50:50.662687 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:50:50.662703 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:50:50.662719 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:50:50.662735 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:50:50.662750 kernel: kauditd_printk_skb: 6 callbacks suppressed
Sep 13 00:50:50.662764 kernel: audit: type=1334 audit(1757724649.960:90): prog-id=12 op=LOAD
Sep 13 00:50:50.662777 kernel: audit: type=1334 audit(1757724649.960:91): prog-id=3 op=UNLOAD
Sep 13 00:50:50.662793 kernel: audit: type=1334 audit(1757724649.966:92): prog-id=13 op=LOAD
Sep 13 00:50:50.662811 kernel: audit: type=1334 audit(1757724649.971:93): prog-id=14 op=LOAD
Sep 13 00:50:50.662826 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:50:50.662842 kernel: audit: type=1334 audit(1757724649.971:94): prog-id=4 op=UNLOAD
Sep 13 00:50:50.662856 kernel: audit: type=1334 audit(1757724649.971:95): prog-id=5 op=UNLOAD
Sep 13 00:50:50.662871 kernel: audit: type=1131 audit(1757724649.973:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.662885 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 00:50:50.662901 kernel: audit: type=1334 audit(1757724650.026:97): prog-id=12 op=UNLOAD
Sep 13 00:50:50.662919 kernel: audit: type=1130 audit(1757724650.033:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.662934 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:50:50.662949 kernel: audit: type=1131 audit(1757724650.033:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.662965 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:50:50.662980 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:50:50.662995 systemd[1]: Created slice system-getty.slice.
Sep 13 00:50:50.663011 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:50:50.663028 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:50:50.663044 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:50:50.663059 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:50:50.663075 systemd[1]: Created slice user.slice.
Sep 13 00:50:50.663090 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:50:50.663104 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:50:50.663119 systemd[1]: Set up automount boot.automount.
Sep 13 00:50:50.663135 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:50:50.663150 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 00:50:50.663168 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:50:50.663183 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:50:50.663199 systemd[1]: Reached target integritysetup.target.
Sep 13 00:50:50.663214 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:50:50.663229 systemd[1]: Reached target remote-fs.target.
Sep 13 00:50:50.663244 systemd[1]: Reached target slices.target.
Sep 13 00:50:50.663260 systemd[1]: Reached target swap.target.
Sep 13 00:50:50.663275 systemd[1]: Reached target torcx.target.
Sep 13 00:50:50.663293 systemd[1]: Reached target veritysetup.target.
Sep 13 00:50:50.663308 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:50:50.663323 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:50:50.663339 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:50:50.663355 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:50:50.663372 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:50:50.663388 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:50:50.663403 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:50:50.663419 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:50:50.663435 systemd[1]: Mounting media.mount...
Sep 13 00:50:50.663450 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:50.663466 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:50:50.663481 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:50:50.663501 systemd[1]: Mounting tmp.mount...
Sep 13 00:50:50.663517 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:50:50.663539 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:50:50.663556 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:50:50.663571 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:50:50.663587 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:50:50.663602 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:50:50.663617 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:50:50.663633 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:50:50.663650 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:50:50.663666 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:50:50.663682 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:50:50.663697 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 00:50:50.663713 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:50:50.663729 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:50:50.663744 systemd[1]: Stopped systemd-journald.service.
Sep 13 00:50:50.663760 systemd[1]: Starting systemd-journald.service...
Sep 13 00:50:50.663776 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:50:50.663794 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:50:50.663809 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:50:50.663824 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:50:50.663841 kernel: loop: module loaded
Sep 13 00:50:50.663854 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:50:50.663868 systemd[1]: Stopped verity-setup.service.
Sep 13 00:50:50.663881 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:50.663895 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:50:50.663908 kernel: fuse: init (API version 7.34)
Sep 13 00:50:50.663923 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:50:50.663936 systemd[1]: Mounted media.mount.
Sep 13 00:50:50.663949 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:50:50.663961 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:50:50.663980 systemd[1]: Mounted tmp.mount.
Sep 13 00:50:50.664001 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:50:50.664017 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:50:50.664040 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:50:50.664057 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:50:50.664073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:50:50.664089 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:50:50.664107 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:50:50.664121 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:50:50.664135 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:50:50.664165 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:50:50.664181 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:50:50.664195 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:50:50.664206 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:50:50.664218 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:50:50.664233 systemd-journald[1151]: Journal started
Sep 13 00:50:50.664288 systemd-journald[1151]: Runtime Journal (/run/log/journal/dc45200221314145a091239d9d0b31f8) is 8.0M, max 159.0M, 151.0M free.
Sep 13 00:50:35.944000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:50:36.819000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:50:36.819000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:50:36.834000 audit: BPF prog-id=10 op=LOAD
Sep 13 00:50:36.834000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 00:50:36.848000 audit: BPF prog-id=11 op=LOAD
Sep 13 00:50:36.848000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 00:50:38.390000 audit[1070]: AVC avc: denied { associate } for pid=1070 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:50:38.390000 audit[1070]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d544 a1=c0000ce738 a2=c0000d6d40 a3=32 items=0 ppid=1053 pid=1070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:50:38.390000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:50:38.398000 audit[1070]: AVC avc: denied { associate } for pid=1070 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:50:38.398000 audit[1070]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d629 a2=1ed a3=0 items=2 ppid=1053 pid=1070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:50:38.398000 audit: CWD cwd="/"
Sep 13 00:50:38.398000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:38.398000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:38.398000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:50:49.960000 audit: BPF prog-id=12 op=LOAD
Sep 13 00:50:49.960000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:50:49.966000 audit: BPF prog-id=13 op=LOAD
Sep 13 00:50:49.971000 audit: BPF prog-id=14 op=LOAD
Sep 13 00:50:49.971000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:50:49.971000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:50:49.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.026000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 00:50:50.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.460000 audit: BPF prog-id=15 op=LOAD
Sep 13 00:50:50.460000 audit: BPF prog-id=16 op=LOAD
Sep 13 00:50:50.460000 audit: BPF prog-id=17 op=LOAD
Sep 13 00:50:50.460000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 00:50:50.460000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 00:50:50.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.658000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:50:50.658000 audit[1151]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fffd5639430 a2=4000 a3=7fffd56394cc items=0 ppid=1 pid=1151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:50:50.658000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:50:49.959285 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:50:38.287077 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:50:49.959297 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Sep 13 00:50:38.314974 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:50:49.973325 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:50:38.314992 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:50:38.315025 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 00:50:38.315033 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 00:50:38.315066 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 00:50:38.315077 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 00:50:38.315258 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 00:50:38.315284 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:50:38.315294 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:50:38.376724 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 00:50:38.376783 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 00:50:38.376805 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 00:50:38.376818 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 00:50:38.376836 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 00:50:38.376850 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 00:50:45.913543 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:45Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:50:45.913806 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:45Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:50:45.913916 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:45Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:50:45.914069 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:45Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:50:45.914112 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:45Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 00:50:45.914163 /usr/lib/systemd/system-generators/torcx-generator[1070]: time="2025-09-13T00:50:45Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 00:50:50.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.677152 systemd[1]: Started systemd-journald.service.
Sep 13 00:50:50.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.677818 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:50:50.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.680367 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:50:50.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.682961 systemd[1]: Reached target network-pre.target.
Sep 13 00:50:50.686577 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:50:50.689937 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:50:50.693747 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:50:50.746010 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:50:50.750732 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:50:50.752865 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:50:50.754147 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:50:50.756912 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:50:50.758168 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:50:50.763314 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:50:50.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.766195 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:50:50.768612 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:50:50.771980 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:50:50.780710 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:50:50.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.784278 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:50:50.804062 systemd-journald[1151]: Time spent on flushing to /var/log/journal/dc45200221314145a091239d9d0b31f8 is 19.239ms for 1159 entries.
Sep 13 00:50:50.804062 systemd-journald[1151]: System Journal (/var/log/journal/dc45200221314145a091239d9d0b31f8) is 8.0M, max 2.6G, 2.6G free.
Sep 13 00:50:50.898002 systemd-journald[1151]: Received client request to flush runtime journal.
Sep 13 00:50:50.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.825304 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:50:50.898285 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:50:50.828028 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:50:50.898967 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:50:50.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:50.983509 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:50:50.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:51.669065 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:50:51.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:52.444738 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:50:52.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:52.447000 audit: BPF prog-id=18 op=LOAD
Sep 13 00:50:52.447000 audit: BPF prog-id=19 op=LOAD
Sep 13 00:50:52.448000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:50:52.448000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:50:52.449211 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:50:52.466051 systemd-udevd[1196]: Using default interface naming scheme 'v252'.
Sep 13 00:50:53.930333 systemd[1]: Started systemd-udevd.service.
Sep 13 00:50:53.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:53.933000 audit: BPF prog-id=20 op=LOAD
Sep 13 00:50:53.934794 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:50:53.970664 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Sep 13 00:50:54.027574 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:50:54.036000 audit[1210]: AVC avc: denied { confidentiality } for pid=1210 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:50:54.047551 kernel: hv_vmbus: registering driver hv_balloon
Sep 13 00:50:54.062550 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 13 00:50:54.036000 audit[1210]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5632f4161680 a1=f83c a2=7fc198eb5bc5 a3=5 items=12 ppid=1196 pid=1210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:50:54.036000 audit: CWD cwd="/"
Sep 13 00:50:54.036000 audit: PATH item=0 name=(null) inode=1238 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=1 name=(null) inode=15582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=2 name=(null) inode=15582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=3 name=(null) inode=15583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=4 name=(null) inode=15582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=5 name=(null) inode=15584 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=6 name=(null) inode=15582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=7 name=(null) inode=15585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=8 name=(null) inode=15582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=9 name=(null) inode=15586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=10 name=(null) inode=15582 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PATH item=11 name=(null) inode=15587 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:50:54.036000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 00:50:54.075902 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 00:50:54.075974 kernel: hv_vmbus: registering driver hv_utils
Sep 13 00:50:54.087554 kernel: hv_vmbus: registering driver hyperv_fb
Sep 13 00:50:54.087629 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 00:50:54.093731 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 00:50:54.093795 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 00:50:54.305919 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 13 00:50:54.305974 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 13 00:50:54.096000 audit: BPF prog-id=21 op=LOAD
Sep 13 00:50:54.096000 audit: BPF prog-id=22 op=LOAD
Sep 13 00:50:54.096000 audit: BPF prog-id=23 op=LOAD
Sep 13 00:50:54.305902 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:50:54.319672 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:50:54.329237 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:50:54.348046 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 13 00:50:54.387216 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:50:54.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:54.612320 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Sep 13 00:50:54.641184 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:50:54.733422 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 00:50:54.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:54.737227 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 00:50:55.102229 lvm[1273]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:50:55.155900 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 00:50:55.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.158931 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:50:55.162577 systemd[1]: Starting lvm2-activation.service...
Sep 13 00:50:55.168137 lvm[1274]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:50:55.192983 systemd[1]: Finished lvm2-activation.service.
Sep 13 00:50:55.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.195355 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:50:55.197963 kernel: kauditd_printk_skb: 65 callbacks suppressed
Sep 13 00:50:55.198000 kernel: audit: type=1130 audit(1757724655.194:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.202314 systemd-networkd[1203]: lo: Link UP
Sep 13 00:50:55.202323 systemd-networkd[1203]: lo: Gained carrier
Sep 13 00:50:55.202892 systemd-networkd[1203]: Enumeration completed
Sep 13 00:50:55.211476 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:50:55.211518 systemd[1]: Reached target local-fs.target.
Sep 13 00:50:55.213551 systemd[1]: Reached target machines.target.
Sep 13 00:50:55.216720 systemd[1]: Starting ldconfig.service...
Sep 13 00:50:55.238136 systemd-networkd[1203]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:50:55.249351 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:50:55.249423 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:50:55.250544 systemd[1]: Starting systemd-boot-update.service...
Sep 13 00:50:55.253806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 00:50:55.257669 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 00:50:55.261137 systemd[1]: Starting systemd-sysext.service...
Sep 13 00:50:55.263231 systemd[1]: Started systemd-networkd.service.
Sep 13 00:50:55.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.266829 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:50:55.279584 kernel: audit: type=1130 audit(1757724655.264:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.295104 kernel: mlx5_core c3e7:00:02.0 enP50151s1: Link up
Sep 13 00:50:55.295552 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 00:50:55.322062 kernel: hv_netvsc 7c1e522e-73fa-7c1e-522e-73fa7c1e522e eth0: Data path switched to VF: enP50151s1
Sep 13 00:50:55.322774 systemd-networkd[1203]: enP50151s1: Link UP
Sep 13 00:50:55.323009 systemd-networkd[1203]: eth0: Link UP
Sep 13 00:50:55.323091 systemd-networkd[1203]: eth0: Gained carrier
Sep 13 00:50:55.327326 systemd-networkd[1203]: enP50151s1: Gained carrier
Sep 13 00:50:55.333127 systemd-networkd[1203]: eth0: DHCPv4 address 10.200.4.42/24, gateway 10.200.4.1 acquired from 168.63.129.16
Sep 13 00:50:55.379353 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1276 (bootctl)
Sep 13 00:50:55.380874 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 00:50:55.448755 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 00:50:55.466120 kernel: audit: type=1130 audit(1757724655.449:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.450553 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 00:50:55.467159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:50:55.468478 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 00:50:55.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.484112 kernel: audit: type=1130 audit(1757724655.470:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.500914 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 00:50:55.501134 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 00:50:55.545050 kernel: loop0: detected capacity change from 0 to 229808
Sep 13 00:50:55.613053 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:50:55.632050 kernel: loop1: detected capacity change from 0 to 229808
Sep 13 00:50:55.641983 (sd-sysext)[1289]: Using extensions 'kubernetes'.
Sep 13 00:50:55.642407 (sd-sysext)[1289]: Merged extensions into '/usr'.
Sep 13 00:50:55.657242 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:55.658504 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 00:50:55.660688 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:50:55.662129 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:50:55.665899 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:50:55.668875 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:50:55.670980 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:50:55.671192 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:50:55.671367 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:55.673803 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:50:55.676180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:50:55.676331 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:50:55.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.679432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:50:55.679564 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:50:55.691055 kernel: audit: type=1130 audit(1757724655.677:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.704935 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:50:55.705079 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:50:55.706137 kernel: audit: type=1131 audit(1757724655.677:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.706209 kernel: audit: type=1130 audit(1757724655.703:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.706238 kernel: audit: type=1131 audit(1757724655.703:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.736740 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:50:55.761916 kernel: audit: type=1130 audit(1757724655.734:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.761996 kernel: audit: type=1131 audit(1757724655.734:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:55.764730 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:50:55.766711 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:50:55.766785 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:50:55.768008 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:50:55.774630 systemd[1]: Reloading.
Sep 13 00:50:55.831906 /usr/lib/systemd/system-generators/torcx-generator[1316]: time="2025-09-13T00:50:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:50:55.848418 /usr/lib/systemd/system-generators/torcx-generator[1316]: time="2025-09-13T00:50:55Z" level=info msg="torcx already run"
Sep 13 00:50:55.915599 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:50:55.915618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:50:55.930405 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:50:55.989000 audit: BPF prog-id=24 op=LOAD
Sep 13 00:50:55.989000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 00:50:55.989000 audit: BPF prog-id=25 op=LOAD
Sep 13 00:50:55.989000 audit: BPF prog-id=26 op=LOAD
Sep 13 00:50:55.989000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 00:50:55.989000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 00:50:55.990000 audit: BPF prog-id=27 op=LOAD
Sep 13 00:50:55.990000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=28 op=LOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=21 op=UNLOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=29 op=LOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=30 op=LOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=22 op=UNLOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=23 op=UNLOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=31 op=LOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=32 op=LOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 00:50:55.991000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 00:50:56.004621 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:56.004843 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:50:56.006084 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:50:56.008517 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:50:56.011222 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:50:56.012691 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:50:56.012891 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:50:56.013117 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:56.016742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:50:56.016905 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:50:56.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.018413 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:50:56.018523 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:50:56.022835 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:50:56.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.024057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:50:56.024182 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:50:56.025461 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:56.025641 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:50:56.027227 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:50:56.030042 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:50:56.032208 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:50:56.035438 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:50:56.035500 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:50:56.035561 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:50:56.035628 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:50:56.036014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:50:56.036189 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:50:56.037815 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:50:56.037920 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:50:56.038241 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:50:56.038336 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:50:56.038574 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:50:56.162416 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:50:56.388634 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:50:56.681946 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:50:56.789193 systemd-fsck[1285]: fsck.fat 4.2 (2021-01-31)
Sep 13 00:50:56.789193 systemd-fsck[1285]: /dev/sda1: 790 files, 120761/258078 clusters
Sep 13 00:50:56.791333 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 00:50:56.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:56.795967 systemd[1]: Mounting boot.mount...
Sep 13 00:50:56.810069 systemd[1]: Mounted boot.mount.
Sep 13 00:50:56.825998 systemd[1]: Finished systemd-boot-update.service.
Sep 13 00:50:56.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:57.316130 systemd-networkd[1203]: eth0: Gained IPv6LL
Sep 13 00:50:57.321770 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:50:57.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:59.868721 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:50:59.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:59.872859 systemd[1]: Starting audit-rules.service...
Sep 13 00:50:59.876135 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:50:59.879342 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:50:59.881000 audit: BPF prog-id=33 op=LOAD
Sep 13 00:50:59.884122 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:50:59.889000 audit: BPF prog-id=34 op=LOAD
Sep 13 00:50:59.891345 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:50:59.894264 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:50:59.912000 audit[1396]: SYSTEM_BOOT pid=1396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:59.918438 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:50:59.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:59.954592 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:50:59.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:50:59.957243 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:51:00.052019 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:51:00.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:00.054457 systemd[1]: Reached target time-set.target.
Sep 13 00:51:00.113739 systemd-resolved[1393]: Positive Trust Anchors:
Sep 13 00:51:00.113753 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:51:00.113788 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:51:00.266237 systemd-resolved[1393]: Using system hostname 'ci-3510.3.8-n-2e01e92296'.
Sep 13 00:51:00.267704 systemd[1]: Started systemd-resolved.service.
Sep 13 00:51:00.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:00.270333 systemd[1]: Reached target network.target.
Sep 13 00:51:00.274079 kernel: kauditd_printk_skb: 42 callbacks suppressed
Sep 13 00:51:00.274182 kernel: audit: type=1130 audit(1757724660.269:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:00.274866 systemd-timesyncd[1395]: Contacted time server 176.58.127.131:123 (0.flatcar.pool.ntp.org).
Sep 13 00:51:00.274933 systemd-timesyncd[1395]: Initial clock synchronization to Sat 2025-09-13 00:51:00.271119 UTC.
Sep 13 00:51:00.290278 systemd[1]: Reached target network-online.target.
Sep 13 00:51:00.292760 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:51:00.295499 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:51:00.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:00.314050 kernel: audit: type=1130 audit(1757724660.297:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:51:00.418000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:51:00.420667 systemd[1]: Finished audit-rules.service.
Sep 13 00:51:00.421271 augenrules[1411]: No rules
Sep 13 00:51:00.418000 audit[1411]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd983229c0 a2=420 a3=0 items=0 ppid=1390 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:51:00.448964 kernel: audit: type=1305 audit(1757724660.418:202): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:51:00.449060 kernel: audit: type=1300 audit(1757724660.418:202): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd983229c0 a2=420 a3=0 items=0 ppid=1390 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:51:00.418000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:51:00.460326 kernel: audit: type=1327 audit(1757724660.418:202): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:51:05.546772 ldconfig[1275]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:51:05.557185 systemd[1]: Finished ldconfig.service.
Sep 13 00:51:05.560934 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:51:05.583431 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:51:05.585810 systemd[1]: Reached target sysinit.target.
Sep 13 00:51:05.588106 systemd[1]: Started motdgen.path.
Sep 13 00:51:05.590017 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:51:05.593199 systemd[1]: Started logrotate.timer.
Sep 13 00:51:05.594989 systemd[1]: Started mdadm.timer.
Sep 13 00:51:05.596647 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:51:05.598944 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:51:05.598984 systemd[1]: Reached target paths.target.
Sep 13 00:51:05.600905 systemd[1]: Reached target timers.target.
Sep 13 00:51:05.603287 systemd[1]: Listening on dbus.socket.
Sep 13 00:51:05.605982 systemd[1]: Starting docker.socket...
Sep 13 00:51:05.638021 systemd[1]: Listening on sshd.socket.
Sep 13 00:51:05.640552 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:51:05.640997 systemd[1]: Listening on docker.socket.
Sep 13 00:51:05.643012 systemd[1]: Reached target sockets.target.
Sep 13 00:51:05.645195 systemd[1]: Reached target basic.target.
Sep 13 00:51:05.647099 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:51:05.647129 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:51:05.648089 systemd[1]: Starting containerd.service...
Sep 13 00:51:05.651215 systemd[1]: Starting dbus.service...
Sep 13 00:51:05.654626 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:51:05.657827 systemd[1]: Starting extend-filesystems.service... Sep 13 00:51:05.659766 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:51:05.685128 systemd[1]: Starting kubelet.service... Sep 13 00:51:05.688985 systemd[1]: Starting motdgen.service... Sep 13 00:51:05.692856 systemd[1]: Started nvidia.service. Sep 13 00:51:05.695961 systemd[1]: Starting prepare-helm.service... Sep 13 00:51:05.698757 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:51:05.702257 systemd[1]: Starting sshd-keygen.service... Sep 13 00:51:05.706600 systemd[1]: Starting systemd-logind.service... Sep 13 00:51:05.710428 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:51:05.710519 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:51:05.711006 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:51:05.711814 systemd[1]: Starting update-engine.service... Sep 13 00:51:05.714909 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:51:05.722384 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:51:05.722604 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:51:05.770026 jq[1421]: false Sep 13 00:51:05.770706 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:51:05.772141 jq[1433]: true Sep 13 00:51:05.770896 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:51:05.784283 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 13 00:51:05.784503 systemd[1]: Finished motdgen.service. Sep 13 00:51:05.814888 extend-filesystems[1422]: Found loop1 Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda1 Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda2 Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda3 Sep 13 00:51:05.818673 extend-filesystems[1422]: Found usr Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda4 Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda6 Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda7 Sep 13 00:51:05.818673 extend-filesystems[1422]: Found sda9 Sep 13 00:51:05.818673 extend-filesystems[1422]: Checking size of /dev/sda9 Sep 13 00:51:05.854712 jq[1443]: true Sep 13 00:51:05.881068 tar[1437]: linux-amd64/LICENSE Sep 13 00:51:05.881068 tar[1437]: linux-amd64/helm Sep 13 00:51:05.892152 systemd-logind[1431]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:51:05.897132 systemd-logind[1431]: New seat seat0. Sep 13 00:51:05.918673 env[1445]: time="2025-09-13T00:51:05.917052705Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:51:05.952730 extend-filesystems[1422]: Old size kept for /dev/sda9 Sep 13 00:51:05.952730 extend-filesystems[1422]: Found sr0 Sep 13 00:51:05.952342 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:51:05.952550 systemd[1]: Finished extend-filesystems.service. Sep 13 00:51:06.029670 env[1445]: time="2025-09-13T00:51:06.029629052Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:51:06.030864 env[1445]: time="2025-09-13T00:51:06.030836852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:51:06.034361 env[1445]: time="2025-09-13T00:51:06.033917942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:51:06.034361 env[1445]: time="2025-09-13T00:51:06.033954436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:51:06.034361 env[1445]: time="2025-09-13T00:51:06.034200895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:51:06.034361 env[1445]: time="2025-09-13T00:51:06.034223292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:51:06.034361 env[1445]: time="2025-09-13T00:51:06.034241788Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:51:06.034361 env[1445]: time="2025-09-13T00:51:06.034255386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:51:06.034361 env[1445]: time="2025-09-13T00:51:06.034349471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:51:06.037905 env[1445]: time="2025-09-13T00:51:06.034598629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:51:06.037905 env[1445]: time="2025-09-13T00:51:06.034761003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:51:06.037905 env[1445]: time="2025-09-13T00:51:06.034782799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:51:06.037905 env[1445]: time="2025-09-13T00:51:06.034848488Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:51:06.037905 env[1445]: time="2025-09-13T00:51:06.034865085Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:51:06.035427 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:51:06.039865 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:51:06.060126 env[1445]: time="2025-09-13T00:51:06.060082013Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:51:06.060267 env[1445]: time="2025-09-13T00:51:06.060251285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:51:06.060341 env[1445]: time="2025-09-13T00:51:06.060329072Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:51:06.060443 env[1445]: time="2025-09-13T00:51:06.060427156Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.060512 env[1445]: time="2025-09-13T00:51:06.060499344Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.060630 env[1445]: time="2025-09-13T00:51:06.060615925Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 13 00:51:06.060696 env[1445]: time="2025-09-13T00:51:06.060683114Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.060757 env[1445]: time="2025-09-13T00:51:06.060743104Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.060826 env[1445]: time="2025-09-13T00:51:06.060814292Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.060885 env[1445]: time="2025-09-13T00:51:06.060873382Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.060945 env[1445]: time="2025-09-13T00:51:06.060933272Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.061023 env[1445]: time="2025-09-13T00:51:06.061008260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:51:06.061253 env[1445]: time="2025-09-13T00:51:06.061235222Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:51:06.061411 env[1445]: time="2025-09-13T00:51:06.061396695Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:51:06.061793 env[1445]: time="2025-09-13T00:51:06.061771833Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:51:06.068571 env[1445]: time="2025-09-13T00:51:06.067391404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070117 env[1445]: time="2025-09-13T00:51:06.070091557Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 13 00:51:06.070300 env[1445]: time="2025-09-13T00:51:06.070258329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070394 env[1445]: time="2025-09-13T00:51:06.070379709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070475 env[1445]: time="2025-09-13T00:51:06.070462895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070549 env[1445]: time="2025-09-13T00:51:06.070537883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070617 env[1445]: time="2025-09-13T00:51:06.070605972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070683 env[1445]: time="2025-09-13T00:51:06.070662962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070750 env[1445]: time="2025-09-13T00:51:06.070738150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070824 env[1445]: time="2025-09-13T00:51:06.070812438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.070904 env[1445]: time="2025-09-13T00:51:06.070893924Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:51:06.071224 env[1445]: time="2025-09-13T00:51:06.071197674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.071319 env[1445]: time="2025-09-13T00:51:06.071305956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 13 00:51:06.072509 env[1445]: time="2025-09-13T00:51:06.072487060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:51:06.072621 env[1445]: time="2025-09-13T00:51:06.072603241Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:51:06.072729 env[1445]: time="2025-09-13T00:51:06.072709824Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:51:06.072812 env[1445]: time="2025-09-13T00:51:06.072799209Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:51:06.072904 env[1445]: time="2025-09-13T00:51:06.072890194Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:51:06.073007 env[1445]: time="2025-09-13T00:51:06.072993377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:51:06.073451 env[1445]: time="2025-09-13T00:51:06.073371914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.075917893Z" level=info msg="Connect containerd service" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.076244139Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077140590Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077271169Z" level=info msg="Start subscribing containerd event" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077333559Z" level=info msg="Start recovering state" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077406047Z" level=info msg="Start event monitor" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077426743Z" level=info msg="Start snapshots syncer" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077439241Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077449439Z" level=info msg="Start streaming server" Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077487533Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.077544424Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:51:06.108623 env[1445]: time="2025-09-13T00:51:06.091465620Z" level=info msg="containerd successfully booted in 0.184499s" Sep 13 00:51:06.077679 systemd[1]: Started containerd.service. 
Sep 13 00:51:06.126961 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 00:51:06.456599 dbus-daemon[1420]: [system] SELinux support is enabled Sep 13 00:51:06.456772 systemd[1]: Started dbus.service. Sep 13 00:51:06.461110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:51:06.461143 systemd[1]: Reached target system-config.target. Sep 13 00:51:06.463519 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:51:06.463542 systemd[1]: Reached target user-config.target. Sep 13 00:51:06.467522 systemd[1]: Started systemd-logind.service. Sep 13 00:51:06.467594 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:51:06.623807 tar[1437]: linux-amd64/README.md Sep 13 00:51:06.630178 systemd[1]: Finished prepare-helm.service. Sep 13 00:51:06.797647 update_engine[1432]: I0913 00:51:06.782969 1432 main.cc:92] Flatcar Update Engine starting Sep 13 00:51:06.849892 systemd[1]: Started update-engine.service. Sep 13 00:51:06.851341 update_engine[1432]: I0913 00:51:06.851243 1432 update_check_scheduler.cc:74] Next update check in 3m40s Sep 13 00:51:06.854843 systemd[1]: Started locksmithd.service. Sep 13 00:51:07.273026 systemd[1]: Started kubelet.service. 
Sep 13 00:51:07.940953 kubelet[1525]: E0913 00:51:07.940904 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:51:07.943677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:51:07.943836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:51:07.944125 systemd[1]: kubelet.service: Consumed 1.248s CPU time. Sep 13 00:51:08.124975 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:51:08.143662 systemd[1]: Finished sshd-keygen.service. Sep 13 00:51:08.147129 systemd[1]: Starting issuegen.service... Sep 13 00:51:08.150235 systemd[1]: Started waagent.service. Sep 13 00:51:08.156831 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:51:08.157005 systemd[1]: Finished issuegen.service. Sep 13 00:51:08.160411 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:51:08.181939 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:51:08.185757 systemd[1]: Started getty@tty1.service. Sep 13 00:51:08.189139 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:51:08.191491 systemd[1]: Reached target getty.target. Sep 13 00:51:08.193562 systemd[1]: Reached target multi-user.target. Sep 13 00:51:08.197406 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:51:08.207186 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:51:08.207363 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:51:08.210225 systemd[1]: Startup finished in 1.038s (kernel) + 14.612s (initrd) + 32.681s (userspace) = 48.333s. 
Sep 13 00:51:08.270748 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:51:09.098294 login[1547]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:51:09.099817 login[1548]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:51:09.245519 systemd[1]: Created slice user-500.slice. Sep 13 00:51:09.246955 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:51:09.249091 systemd-logind[1431]: New session 2 of user core. Sep 13 00:51:09.252959 systemd-logind[1431]: New session 1 of user core. Sep 13 00:51:09.295896 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:51:09.297532 systemd[1]: Starting user@500.service... Sep 13 00:51:09.338474 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:51:09.731861 systemd[1553]: Queued start job for default target default.target. Sep 13 00:51:09.732505 systemd[1553]: Reached target paths.target. Sep 13 00:51:09.732534 systemd[1553]: Reached target sockets.target. Sep 13 00:51:09.732551 systemd[1553]: Reached target timers.target. Sep 13 00:51:09.732566 systemd[1553]: Reached target basic.target. Sep 13 00:51:09.732682 systemd[1]: Started user@500.service. Sep 13 00:51:09.733799 systemd[1]: Started session-1.scope. Sep 13 00:51:09.734430 systemd[1553]: Reached target default.target. Sep 13 00:51:09.734554 systemd[1]: Started session-2.scope. Sep 13 00:51:09.735543 systemd[1553]: Startup finished in 391ms. Sep 13 00:51:18.046472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:51:18.046714 systemd[1]: Stopped kubelet.service. Sep 13 00:51:18.046763 systemd[1]: kubelet.service: Consumed 1.248s CPU time. Sep 13 00:51:18.048311 systemd[1]: Starting kubelet.service... Sep 13 00:51:18.827728 systemd[1]: Started kubelet.service. 
Sep 13 00:51:18.950848 kubelet[1579]: E0913 00:51:18.950806 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:51:18.953828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:51:18.953987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:51:19.605945 waagent[1542]: 2025-09-13T00:51:19.605839Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 13 00:51:19.642044 waagent[1542]: 2025-09-13T00:51:19.641947Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 13 00:51:19.644708 waagent[1542]: 2025-09-13T00:51:19.644650Z INFO Daemon Daemon Python: 3.9.16 Sep 13 00:51:19.647396 waagent[1542]: 2025-09-13T00:51:19.647317Z INFO Daemon Daemon Run daemon Sep 13 00:51:19.650485 waagent[1542]: 2025-09-13T00:51:19.650024Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 13 00:51:19.678141 waagent[1542]: 2025-09-13T00:51:19.677996Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 13 00:51:19.685865 waagent[1542]: 2025-09-13T00:51:19.685746Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.686207Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.687069Z INFO Daemon Daemon Using waagent for provisioning Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.688622Z INFO Daemon Daemon Activate resource disk Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.689384Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.698024Z INFO Daemon Daemon Found device: None Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.699068Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.699973Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.701964Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.703007Z INFO Daemon Daemon Running default provisioning handler Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.712826Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.716274Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.717229Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 00:51:19.734185 waagent[1542]: 2025-09-13T00:51:19.718153Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 00:51:19.929373 waagent[1542]: 2025-09-13T00:51:19.929151Z INFO Daemon Daemon Successfully mounted dvd Sep 13 00:51:20.010240 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 00:51:20.048195 waagent[1542]: 2025-09-13T00:51:20.048057Z INFO Daemon Daemon Detect protocol endpoint Sep 13 00:51:20.064246 waagent[1542]: 2025-09-13T00:51:20.048778Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:51:20.064246 waagent[1542]: 2025-09-13T00:51:20.049832Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 13 00:51:20.064246 waagent[1542]: 2025-09-13T00:51:20.050717Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 00:51:20.064246 waagent[1542]: 2025-09-13T00:51:20.051811Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 00:51:20.064246 waagent[1542]: 2025-09-13T00:51:20.052506Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 00:51:20.197015 waagent[1542]: 2025-09-13T00:51:20.196892Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 00:51:20.205270 waagent[1542]: 2025-09-13T00:51:20.197850Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 00:51:20.205270 waagent[1542]: 2025-09-13T00:51:20.198794Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 00:51:20.567770 waagent[1542]: 2025-09-13T00:51:20.567627Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 00:51:20.577333 waagent[1542]: 2025-09-13T00:51:20.577267Z INFO Daemon Daemon Forcing an update of the goal state.. 
Sep 13 00:51:20.582700 waagent[1542]: 2025-09-13T00:51:20.577564Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Sep 13 00:51:20.650798 waagent[1542]: 2025-09-13T00:51:20.650671Z INFO Daemon Daemon Found private key matching thumbprint 19065FD6F6047182F797BD51D0D9BB7D814CF814
Sep 13 00:51:20.657184 waagent[1542]: 2025-09-13T00:51:20.651272Z INFO Daemon Daemon Fetch goal state completed
Sep 13 00:51:20.685336 waagent[1542]: 2025-09-13T00:51:20.685272Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 15855d3b-c128-4b85-a3d7-9326ff45b21a New eTag: 5357123513451030443]
Sep 13 00:51:20.693440 waagent[1542]: 2025-09-13T00:51:20.686065Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Sep 13 00:51:20.698224 waagent[1542]: 2025-09-13T00:51:20.698170Z INFO Daemon Daemon Starting provisioning
Sep 13 00:51:20.705596 waagent[1542]: 2025-09-13T00:51:20.698448Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 13 00:51:20.705596 waagent[1542]: 2025-09-13T00:51:20.699517Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-2e01e92296]
Sep 13 00:51:20.720633 waagent[1542]: 2025-09-13T00:51:20.720513Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-2e01e92296]
Sep 13 00:51:20.729436 waagent[1542]: 2025-09-13T00:51:20.721239Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 13 00:51:20.729436 waagent[1542]: 2025-09-13T00:51:20.722285Z INFO Daemon Daemon Primary interface is [eth0]
Sep 13 00:51:20.736390 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Sep 13 00:51:20.736625 systemd[1]: Stopped systemd-networkd-wait-online.service.
Sep 13 00:51:20.736696 systemd[1]: Stopping systemd-networkd-wait-online.service...
Sep 13 00:51:20.737017 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:51:20.742084 systemd-networkd[1203]: eth0: DHCPv6 lease lost
Sep 13 00:51:20.743455 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:51:20.743650 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:51:20.746154 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:51:20.776305 systemd-networkd[1608]: enP50151s1: Link UP
Sep 13 00:51:20.776316 systemd-networkd[1608]: enP50151s1: Gained carrier
Sep 13 00:51:20.777775 systemd-networkd[1608]: eth0: Link UP
Sep 13 00:51:20.777786 systemd-networkd[1608]: eth0: Gained carrier
Sep 13 00:51:20.778226 systemd-networkd[1608]: lo: Link UP
Sep 13 00:51:20.778233 systemd-networkd[1608]: lo: Gained carrier
Sep 13 00:51:20.778517 systemd-networkd[1608]: eth0: Gained IPv6LL
Sep 13 00:51:20.779007 systemd-networkd[1608]: Enumeration completed
Sep 13 00:51:20.779129 systemd[1]: Started systemd-networkd.service.
Sep 13 00:51:20.781193 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:51:20.788070 waagent[1542]: 2025-09-13T00:51:20.783625Z INFO Daemon Daemon Create user account if not exists
Sep 13 00:51:20.788070 waagent[1542]: 2025-09-13T00:51:20.784351Z INFO Daemon Daemon User core already exists, skip useradd
Sep 13 00:51:20.788070 waagent[1542]: 2025-09-13T00:51:20.785283Z INFO Daemon Daemon Configure sudoer
Sep 13 00:51:20.788867 systemd-networkd[1608]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:51:20.803092 systemd-networkd[1608]: eth0: DHCPv4 address 10.200.4.42/24, gateway 10.200.4.1 acquired from 168.63.129.16
Sep 13 00:51:20.805127 waagent[1542]: 2025-09-13T00:51:20.803274Z INFO Daemon Daemon Configure sshd
Sep 13 00:51:20.805127 waagent[1542]: 2025-09-13T00:51:20.803652Z INFO Daemon Daemon Deploy ssh public key.
Sep 13 00:51:20.810360 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:51:21.945429 waagent[1542]: 2025-09-13T00:51:21.945338Z INFO Daemon Daemon Provisioning complete
Sep 13 00:51:21.957765 waagent[1542]: 2025-09-13T00:51:21.957697Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 13 00:51:21.961160 waagent[1542]: 2025-09-13T00:51:21.961102Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 13 00:51:21.966667 waagent[1542]: 2025-09-13T00:51:21.966607Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Sep 13 00:51:22.229211 waagent[1614]: 2025-09-13T00:51:22.229039Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Sep 13 00:51:22.229944 waagent[1614]: 2025-09-13T00:51:22.229875Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 00:51:22.230103 waagent[1614]: 2025-09-13T00:51:22.230043Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 00:51:22.240842 waagent[1614]: 2025-09-13T00:51:22.240764Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Sep 13 00:51:22.241014 waagent[1614]: 2025-09-13T00:51:22.240948Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Sep 13 00:51:22.287254 waagent[1614]: 2025-09-13T00:51:22.287144Z INFO ExtHandler ExtHandler Found private key matching thumbprint 19065FD6F6047182F797BD51D0D9BB7D814CF814
Sep 13 00:51:22.287541 waagent[1614]: 2025-09-13T00:51:22.287487Z INFO ExtHandler ExtHandler Fetch goal state completed
Sep 13 00:51:22.299520 waagent[1614]: 2025-09-13T00:51:22.299465Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 682ce23b-694d-4b6e-bb5d-20e2db702b90 New eTag: 5357123513451030443]
Sep 13 00:51:22.299995 waagent[1614]: 2025-09-13T00:51:22.299940Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Sep 13 00:51:22.414960 waagent[1614]: 2025-09-13T00:51:22.414823Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 13 00:51:22.443974 waagent[1614]: 2025-09-13T00:51:22.443881Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1614
Sep 13 00:51:22.447292 waagent[1614]: 2025-09-13T00:51:22.447226Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Sep 13 00:51:22.448437 waagent[1614]: 2025-09-13T00:51:22.448378Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 13 00:51:22.604176 waagent[1614]: 2025-09-13T00:51:22.604117Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 13 00:51:22.604569 waagent[1614]: 2025-09-13T00:51:22.604515Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 13 00:51:22.611992 waagent[1614]: 2025-09-13T00:51:22.611933Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 13 00:51:22.612485 waagent[1614]: 2025-09-13T00:51:22.612426Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Sep 13 00:51:22.613525 waagent[1614]: 2025-09-13T00:51:22.613462Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Sep 13 00:51:22.614808 waagent[1614]: 2025-09-13T00:51:22.614752Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 13 00:51:22.615214 waagent[1614]: 2025-09-13T00:51:22.615155Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 00:51:22.615364 waagent[1614]: 2025-09-13T00:51:22.615318Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 00:51:22.615855 waagent[1614]: 2025-09-13T00:51:22.615802Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 13 00:51:22.616143 waagent[1614]: 2025-09-13T00:51:22.616089Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 13 00:51:22.616143 waagent[1614]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 13 00:51:22.616143 waagent[1614]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Sep 13 00:51:22.616143 waagent[1614]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 13 00:51:22.616143 waagent[1614]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 13 00:51:22.616143 waagent[1614]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 00:51:22.616143 waagent[1614]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 00:51:22.619070 waagent[1614]: 2025-09-13T00:51:22.618865Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
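An editorial aside for readers tracing the routing table dumped above: /proc/net/route stores each address field as little-endian hex, which is why the DHCP gateway 10.200.4.1 appears as 0104C80A. A minimal decoding sketch (the hex fields below are copied from the dump; the function name is illustrative, not part of waagent):

```python
import socket
import struct

def decode_proc_route_addr(field: str) -> str:
    """Decode one little-endian hex address field from /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

# Fields taken from the MonitorHandler routing-table dump above:
print(decode_proc_route_addr("0104C80A"))  # default-route gateway -> 10.200.4.1
print(decode_proc_route_addr("0004C80A"))  # on-link destination   -> 10.200.4.0
print(decode_proc_route_addr("00FFFFFF"))  # its netmask           -> 255.255.255.0
```

The decoded values match the DHCPv4 lease logged by systemd-networkd (10.200.4.42/24, gateway 10.200.4.1).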
Sep 13 00:51:22.619210 waagent[1614]: 2025-09-13T00:51:22.619131Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 00:51:22.619602 waagent[1614]: 2025-09-13T00:51:22.619550Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 00:51:22.620303 waagent[1614]: 2025-09-13T00:51:22.620247Z INFO EnvHandler ExtHandler Configure routes
Sep 13 00:51:22.620624 waagent[1614]: 2025-09-13T00:51:22.620574Z INFO EnvHandler ExtHandler Gateway:None
Sep 13 00:51:22.621096 waagent[1614]: 2025-09-13T00:51:22.621012Z INFO EnvHandler ExtHandler Routes:None
Sep 13 00:51:22.621936 waagent[1614]: 2025-09-13T00:51:22.621885Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 13 00:51:22.622247 waagent[1614]: 2025-09-13T00:51:22.622190Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 13 00:51:22.622989 waagent[1614]: 2025-09-13T00:51:22.622930Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 13 00:51:22.623192 waagent[1614]: 2025-09-13T00:51:22.623135Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 13 00:51:22.623341 waagent[1614]: 2025-09-13T00:51:22.623291Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 13 00:51:22.642107 waagent[1614]: 2025-09-13T00:51:22.642026Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Sep 13 00:51:22.642625 waagent[1614]: 2025-09-13T00:51:22.642578Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Sep 13 00:51:22.643429 waagent[1614]: 2025-09-13T00:51:22.643372Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Sep 13 00:51:22.672508 waagent[1614]: 2025-09-13T00:51:22.672437Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Sep 13 00:51:22.682353 waagent[1614]: 2025-09-13T00:51:22.682283Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1608'
Sep 13 00:51:22.802050 waagent[1614]: 2025-09-13T00:51:22.792344Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 13 00:51:22.802050 waagent[1614]: Executing ['ip', '-a', '-o', 'link']:
Sep 13 00:51:22.802050 waagent[1614]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 13 00:51:22.802050 waagent[1614]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:73:fa brd ff:ff:ff:ff:ff:ff
Sep 13 00:51:22.802050 waagent[1614]: 3: enP50151s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:73:fa brd ff:ff:ff:ff:ff:ff\ altname enP50151p0s2
Sep 13 00:51:22.802050 waagent[1614]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 13 00:51:22.802050 waagent[1614]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 13 00:51:22.802050 waagent[1614]: 2: eth0 inet 10.200.4.42/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 13 00:51:22.802050 waagent[1614]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 13 00:51:22.802050 waagent[1614]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Sep 13 00:51:22.802050 waagent[1614]: 2: eth0 inet6 fe80::7e1e:52ff:fe2e:73fa/64 scope link \ valid_lft forever preferred_lft forever
Sep 13 00:51:22.903514 waagent[1614]: 2025-09-13T00:51:22.903392Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting
Sep 13 00:51:22.969884 waagent[1542]: 2025-09-13T00:51:22.969762Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Sep 13 00:51:22.976153 waagent[1542]: 2025-09-13T00:51:22.976098Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent
Sep 13 00:51:24.142701 waagent[1642]: 2025-09-13T00:51:24.142591Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1)
Sep 13 00:51:24.143442 waagent[1642]: 2025-09-13T00:51:24.143374Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8
Sep 13 00:51:24.143598 waagent[1642]: 2025-09-13T00:51:24.143547Z INFO ExtHandler ExtHandler Python: 3.9.16
Sep 13 00:51:24.143755 waagent[1642]: 2025-09-13T00:51:24.143710Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Sep 13 00:51:24.158300 waagent[1642]: 2025-09-13T00:51:24.158200Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 13 00:51:24.158719 waagent[1642]: 2025-09-13T00:51:24.158660Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 00:51:24.158895 waagent[1642]: 2025-09-13T00:51:24.158847Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 00:51:24.159137 waagent[1642]: 2025-09-13T00:51:24.159087Z INFO ExtHandler ExtHandler Initializing the goal state...
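Another aside on the interface dump logged earlier: the eth0 link-local address fe80::7e1e:52ff:fe2e:73fa is the Modified EUI-64 derivation of the MAC 7c:1e:52:2e:73:fa (flip the universal/local bit, insert ff:fe in the middle). A minimal sketch of that derivation; the function name is illustrative, and it does not handle zero-group compression beyond the fe80:: prefix:

```python
def mac_to_eui64_link_local(mac: str) -> str:
    """Derive the Modified EUI-64 IPv6 link-local address from a 48-bit MAC."""
    octets = [int(p, 16) for p in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe between OUI and NIC halves
    groups = ["%x" % ((eui[i] << 8) | eui[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(mac_to_eui64_link_local("7c:1e:52:2e:73:fa"))  # fe80::7e1e:52ff:fe2e:73fa
```

The result matches the address systemd-networkd and the agent report for eth0 above.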
Sep 13 00:51:24.170588 waagent[1642]: 2025-09-13T00:51:24.170514Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 13 00:51:24.178232 waagent[1642]: 2025-09-13T00:51:24.178166Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Sep 13 00:51:24.179127 waagent[1642]: 2025-09-13T00:51:24.179071Z INFO ExtHandler
Sep 13 00:51:24.179293 waagent[1642]: 2025-09-13T00:51:24.179245Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7b4ae8c3-7fe5-4f8d-8834-c4f8cf54b802 eTag: 5357123513451030443 source: Fabric]
Sep 13 00:51:24.179961 waagent[1642]: 2025-09-13T00:51:24.179904Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 13 00:51:24.181021 waagent[1642]: 2025-09-13T00:51:24.180958Z INFO ExtHandler
Sep 13 00:51:24.181189 waagent[1642]: 2025-09-13T00:51:24.181139Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 13 00:51:24.187107 waagent[1642]: 2025-09-13T00:51:24.187053Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 13 00:51:24.187577 waagent[1642]: 2025-09-13T00:51:24.187527Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Sep 13 00:51:24.214938 waagent[1642]: 2025-09-13T00:51:24.214881Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Sep 13 00:51:24.264730 waagent[1642]: 2025-09-13T00:51:24.264621Z INFO ExtHandler Downloaded certificate {'thumbprint': '19065FD6F6047182F797BD51D0D9BB7D814CF814', 'hasPrivateKey': True}
Sep 13 00:51:24.265915 waagent[1642]: 2025-09-13T00:51:24.265851Z INFO ExtHandler Fetch goal state from WireServer completed
Sep 13 00:51:24.266717 waagent[1642]: 2025-09-13T00:51:24.266659Z INFO ExtHandler ExtHandler Goal state initialization completed.
Sep 13 00:51:24.282988 waagent[1642]: 2025-09-13T00:51:24.282886Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Sep 13 00:51:24.290485 waagent[1642]: 2025-09-13T00:51:24.290393Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Sep 13 00:51:24.293910 waagent[1642]: 2025-09-13T00:51:24.293814Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
Sep 13 00:51:24.294163 waagent[1642]: 2025-09-13T00:51:24.294108Z INFO ExtHandler ExtHandler Checking state of the firewall
Sep 13 00:51:24.443557 waagent[1642]: 2025-09-13T00:51:24.443384Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric:
Sep 13 00:51:24.443557 waagent[1642]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 13 00:51:24.443557 waagent[1642]: pkts bytes target prot opt in out source destination
Sep 13 00:51:24.443557 waagent[1642]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 13 00:51:24.443557 waagent[1642]: pkts bytes target prot opt in out source destination
Sep 13 00:51:24.443557 waagent[1642]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 13 00:51:24.443557 waagent[1642]: pkts bytes target prot opt in out source destination
Sep 13 00:51:24.443557 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 13 00:51:24.443557 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 13 00:51:24.443557 waagent[1642]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 13 00:51:24.444643 waagent[1642]: 2025-09-13T00:51:24.444574Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
Sep 13 00:51:24.447190 waagent[1642]: 2025-09-13T00:51:24.447091Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
Sep 13 00:51:24.447459 waagent[1642]: 2025-09-13T00:51:24.447405Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 13 00:51:24.447812 waagent[1642]: 2025-09-13T00:51:24.447757Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 13 00:51:24.455177 waagent[1642]: 2025-09-13T00:51:24.455121Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 13 00:51:24.455661 waagent[1642]: 2025-09-13T00:51:24.455605Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Sep 13 00:51:24.463025 waagent[1642]: 2025-09-13T00:51:24.462953Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1642
Sep 13 00:51:24.466023 waagent[1642]: 2025-09-13T00:51:24.465956Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Sep 13 00:51:24.466754 waagent[1642]: 2025-09-13T00:51:24.466695Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
Sep 13 00:51:24.467577 waagent[1642]: 2025-09-13T00:51:24.467518Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 13 00:51:24.470004 waagent[1642]: 2025-09-13T00:51:24.469942Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem
Sep 13 00:51:24.470343 waagent[1642]: 2025-09-13T00:51:24.470288Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 13 00:51:24.471599 waagent[1642]: 2025-09-13T00:51:24.471544Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 13 00:51:24.471991 waagent[1642]: 2025-09-13T00:51:24.471935Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 00:51:24.472174 waagent[1642]: 2025-09-13T00:51:24.472125Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 00:51:24.472674 waagent[1642]: 2025-09-13T00:51:24.472625Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 13 00:51:24.472975 waagent[1642]: 2025-09-13T00:51:24.472922Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 13 00:51:24.472975 waagent[1642]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 13 00:51:24.472975 waagent[1642]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Sep 13 00:51:24.472975 waagent[1642]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 13 00:51:24.472975 waagent[1642]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 13 00:51:24.472975 waagent[1642]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 00:51:24.472975 waagent[1642]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 00:51:24.475240 waagent[1642]: 2025-09-13T00:51:24.475132Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 13 00:51:24.476150 waagent[1642]: 2025-09-13T00:51:24.476093Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 13 00:51:24.476676 waagent[1642]: 2025-09-13T00:51:24.476620Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 13 00:51:24.477548 waagent[1642]: 2025-09-13T00:51:24.477496Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 00:51:24.480230 waagent[1642]: 2025-09-13T00:51:24.480124Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 00:51:24.481340 waagent[1642]: 2025-09-13T00:51:24.481244Z INFO EnvHandler ExtHandler Configure routes
Sep 13 00:51:24.483931 waagent[1642]: 2025-09-13T00:51:24.483818Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 13 00:51:24.484322 waagent[1642]: 2025-09-13T00:51:24.484248Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 13 00:51:24.484812 waagent[1642]: 2025-09-13T00:51:24.484757Z INFO EnvHandler ExtHandler Gateway:None
Sep 13 00:51:24.484979 waagent[1642]: 2025-09-13T00:51:24.484933Z INFO EnvHandler ExtHandler Routes:None
Sep 13 00:51:24.488861 waagent[1642]: 2025-09-13T00:51:24.488695Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 13 00:51:24.490714 waagent[1642]: 2025-09-13T00:51:24.490641Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 13 00:51:24.490714 waagent[1642]: Executing ['ip', '-a', '-o', 'link']:
Sep 13 00:51:24.490714 waagent[1642]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 13 00:51:24.490714 waagent[1642]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:73:fa brd ff:ff:ff:ff:ff:ff
Sep 13 00:51:24.490714 waagent[1642]: 3: enP50151s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:73:fa brd ff:ff:ff:ff:ff:ff\ altname enP50151p0s2
Sep 13 00:51:24.490714 waagent[1642]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 13 00:51:24.490714 waagent[1642]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 13 00:51:24.490714 waagent[1642]: 2: eth0 inet 10.200.4.42/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 13 00:51:24.490714 waagent[1642]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 13 00:51:24.490714 waagent[1642]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Sep 13 00:51:24.490714 waagent[1642]: 2: eth0 inet6 fe80::7e1e:52ff:fe2e:73fa/64 scope link \ valid_lft forever preferred_lft forever
Sep 13 00:51:24.507511 waagent[1642]: 2025-09-13T00:51:24.507446Z INFO ExtHandler ExtHandler Downloading agent manifest
Sep 13 00:51:24.519632 waagent[1642]: 2025-09-13T00:51:24.519579Z INFO ExtHandler ExtHandler
Sep 13 00:51:24.520537 waagent[1642]: 2025-09-13T00:51:24.520488Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f3d02905-faf6-455f-8d75-45a0255831f2 correlation abd808db-8150-4285-ae5e-2c247e36b828 created: 2025-09-13T00:49:39.974770Z]
Sep 13 00:51:24.524087 waagent[1642]: 2025-09-13T00:51:24.524017Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 13 00:51:24.528846 waagent[1642]: 2025-09-13T00:51:24.528793Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms]
Sep 13 00:51:24.547627 waagent[1642]: 2025-09-13T00:51:24.547570Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Sep 13 00:51:24.558947 waagent[1642]: 2025-09-13T00:51:24.558881Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Sep 13 00:51:24.565745 waagent[1642]: 2025-09-13T00:51:24.565686Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9A5E44A6-C55A-491A-A534-1DBAAE280089;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Sep 13 00:51:24.565994 waagent[1642]: 2025-09-13T00:51:24.565943Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 13 00:51:27.179936 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:51:27.181637 systemd[1]: Started sshd@0-10.200.4.42:22-10.200.16.10:44062.service.
Sep 13 00:51:28.009543 sshd[1684]: Accepted publickey for core from 10.200.16.10 port 44062 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:51:28.010788 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:28.015207 systemd[1]: Started session-3.scope.
Sep 13 00:51:28.015743 systemd-logind[1431]: New session 3 of user core.
Sep 13 00:51:28.529486 systemd[1]: Started sshd@1-10.200.4.42:22-10.200.16.10:44072.service.
Sep 13 00:51:29.016496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:51:29.016803 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:29.018406 systemd[1]: Starting kubelet.service...
Sep 13 00:51:29.110501 systemd[1]: Started kubelet.service.
Sep 13 00:51:29.121513 sshd[1689]: Accepted publickey for core from 10.200.16.10 port 44072 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:51:29.121339 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:29.126439 systemd[1]: Started session-4.scope.
Sep 13 00:51:29.127995 systemd-logind[1431]: New session 4 of user core.
Sep 13 00:51:29.542508 sshd[1689]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:29.545059 systemd[1]: sshd@1-10.200.4.42:22-10.200.16.10:44072.service: Deactivated successfully.
Sep 13 00:51:29.545936 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:51:29.547490 systemd-logind[1431]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:51:29.548436 systemd-logind[1431]: Removed session 4.
Sep 13 00:51:29.639757 systemd[1]: Started sshd@2-10.200.4.42:22-10.200.16.10:44078.service.
Sep 13 00:51:29.788348 kubelet[1695]: E0913 00:51:29.788302 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:51:29.790007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:51:29.790170 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:51:30.234742 sshd[1705]: Accepted publickey for core from 10.200.16.10 port 44078 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:51:30.236043 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:30.240475 systemd[1]: Started session-5.scope.
Sep 13 00:51:30.241009 systemd-logind[1431]: New session 5 of user core.
Sep 13 00:51:30.651778 sshd[1705]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:30.654316 systemd[1]: sshd@2-10.200.4.42:22-10.200.16.10:44078.service: Deactivated successfully.
Sep 13 00:51:30.655086 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:51:30.655679 systemd-logind[1431]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:51:30.656407 systemd-logind[1431]: Removed session 5.
Sep 13 00:51:30.750075 systemd[1]: Started sshd@3-10.200.4.42:22-10.200.16.10:43142.service.
Sep 13 00:51:31.338815 sshd[1711]: Accepted publickey for core from 10.200.16.10 port 43142 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:51:31.340137 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:31.344595 systemd[1]: Started session-6.scope.
Sep 13 00:51:31.345145 systemd-logind[1431]: New session 6 of user core.
Sep 13 00:51:31.765958 sshd[1711]: pam_unix(sshd:session): session closed for user core
Sep 13 00:51:31.768814 systemd-logind[1431]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:51:31.769014 systemd[1]: sshd@3-10.200.4.42:22-10.200.16.10:43142.service: Deactivated successfully.
Sep 13 00:51:31.769817 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:51:31.770511 systemd-logind[1431]: Removed session 6.
Sep 13 00:51:31.866061 systemd[1]: Started sshd@4-10.200.4.42:22-10.200.16.10:43150.service.
Sep 13 00:51:32.456807 sshd[1717]: Accepted publickey for core from 10.200.16.10 port 43150 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:51:32.458099 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:51:32.462562 systemd[1]: Started session-7.scope.
Sep 13 00:51:32.463128 systemd-logind[1431]: New session 7 of user core.
Sep 13 00:51:33.183667 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:51:33.183962 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:51:33.220735 systemd[1]: Starting docker.service...
Sep 13 00:51:33.272413 env[1730]: time="2025-09-13T00:51:33.272376853Z" level=info msg="Starting up"
Sep 13 00:51:33.273762 env[1730]: time="2025-09-13T00:51:33.273738814Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:51:33.273861 env[1730]: time="2025-09-13T00:51:33.273850410Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:51:33.273915 env[1730]: time="2025-09-13T00:51:33.273905109Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:51:33.273951 env[1730]: time="2025-09-13T00:51:33.273944308Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:51:33.275882 env[1730]: time="2025-09-13T00:51:33.275857252Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:51:33.275882 env[1730]: time="2025-09-13T00:51:33.275873852Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:51:33.276004 env[1730]: time="2025-09-13T00:51:33.275890551Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:51:33.276004 env[1730]: time="2025-09-13T00:51:33.275902651Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:51:33.283380 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1771660615-merged.mount: Deactivated successfully.
Sep 13 00:51:33.356983 env[1730]: time="2025-09-13T00:51:33.356945904Z" level=info msg="Loading containers: start."
Sep 13 00:51:33.569063 kernel: Initializing XFRM netlink socket
Sep 13 00:51:33.620387 env[1730]: time="2025-09-13T00:51:33.620351074Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:51:33.787892 systemd-networkd[1608]: docker0: Link UP
Sep 13 00:51:33.813298 env[1730]: time="2025-09-13T00:51:33.813256387Z" level=info msg="Loading containers: done."
Sep 13 00:51:33.831225 env[1730]: time="2025-09-13T00:51:33.830916776Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:51:33.831225 env[1730]: time="2025-09-13T00:51:33.831155969Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:51:33.831456 env[1730]: time="2025-09-13T00:51:33.831405761Z" level=info msg="Daemon has completed initialization"
Sep 13 00:51:33.859748 systemd[1]: Started docker.service.
Sep 13 00:51:33.870257 env[1730]: time="2025-09-13T00:51:33.870195738Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:51:38.217551 env[1445]: time="2025-09-13T00:51:38.217505040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 00:51:39.015337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount382426100.mount: Deactivated successfully.
Sep 13 00:51:39.796462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 00:51:39.796685 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:39.798243 systemd[1]: Starting kubelet.service...
Sep 13 00:51:40.230704 systemd[1]: Started kubelet.service.
Sep 13 00:51:40.689825 kubelet[1849]: E0913 00:51:40.689778 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:51:40.691333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:51:40.691497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:51:41.705073 env[1445]: time="2025-09-13T00:51:41.705014523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:41.713083 env[1445]: time="2025-09-13T00:51:41.713050084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:41.716689 env[1445]: time="2025-09-13T00:51:41.716654922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:41.719995 env[1445]: time="2025-09-13T00:51:41.719967565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:41.720650 env[1445]: time="2025-09-13T00:51:41.720619053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 13 00:51:41.721418 env[1445]: time="2025-09-13T00:51:41.721390940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 00:51:42.418069 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Sep 13 00:51:43.511041 env[1445]: time="2025-09-13T00:51:43.510983753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:43.516511 env[1445]: time="2025-09-13T00:51:43.516471369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:43.519506 env[1445]: time="2025-09-13T00:51:43.519475624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:43.522818 env[1445]: time="2025-09-13T00:51:43.522789473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:43.523531 env[1445]: time="2025-09-13T00:51:43.523501262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 00:51:43.524180 env[1445]: time="2025-09-13T00:51:43.524157352Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 00:51:45.210770 env[1445]: time="2025-09-13T00:51:45.210722169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:45.215987 env[1445]: time="2025-09-13T00:51:45.215944699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:45.219604 env[1445]: time="2025-09-13T00:51:45.219573151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:45.223448 env[1445]: time="2025-09-13T00:51:45.223422100Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:45.224063 env[1445]: time="2025-09-13T00:51:45.224019192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 00:51:45.224850 env[1445]: time="2025-09-13T00:51:45.224820381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 00:51:46.714442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270575909.mount: Deactivated successfully.
Sep 13 00:51:47.443257 env[1445]: time="2025-09-13T00:51:47.443208212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:47.447756 env[1445]: time="2025-09-13T00:51:47.447722659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:47.451079 env[1445]: time="2025-09-13T00:51:47.451049320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:47.453603 env[1445]: time="2025-09-13T00:51:47.453575591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:47.453920 env[1445]: time="2025-09-13T00:51:47.453892587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 00:51:47.454413 env[1445]: time="2025-09-13T00:51:47.454391381Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 00:51:48.099948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157527793.mount: Deactivated successfully.
Sep 13 00:51:49.443501 env[1445]: time="2025-09-13T00:51:49.443445201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:49.448802 env[1445]: time="2025-09-13T00:51:49.448766346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:49.452755 env[1445]: time="2025-09-13T00:51:49.452658706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:49.456681 env[1445]: time="2025-09-13T00:51:49.456651465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:49.457325 env[1445]: time="2025-09-13T00:51:49.457294758Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 00:51:49.458538 env[1445]: time="2025-09-13T00:51:49.458507046Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:51:50.063280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643323193.mount: Deactivated successfully.
Sep 13 00:51:50.082245 env[1445]: time="2025-09-13T00:51:50.082195764Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:50.087787 env[1445]: time="2025-09-13T00:51:50.087755210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:50.090347 env[1445]: time="2025-09-13T00:51:50.090318786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:50.093815 env[1445]: time="2025-09-13T00:51:50.093784552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:50.094302 env[1445]: time="2025-09-13T00:51:50.094270847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:51:50.094877 env[1445]: time="2025-09-13T00:51:50.094846642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 00:51:50.691067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010508947.mount: Deactivated successfully.
Sep 13 00:51:50.796540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 13 00:51:50.796761 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:50.798573 systemd[1]: Starting kubelet.service...
Sep 13 00:51:51.220214 systemd[1]: Started kubelet.service.
Sep 13 00:51:51.635334 kubelet[1858]: E0913 00:51:51.635283 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:51:51.636952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:51:51.637090 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:51:51.884697 update_engine[1432]: I0913 00:51:51.884109 1432 update_attempter.cc:509] Updating boot flags...
Sep 13 00:51:54.528454 env[1445]: time="2025-09-13T00:51:54.528402607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:54.534074 env[1445]: time="2025-09-13T00:51:54.534025965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:54.537706 env[1445]: time="2025-09-13T00:51:54.537675538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:54.540742 env[1445]: time="2025-09-13T00:51:54.540711615Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:51:54.542202 env[1445]: time="2025-09-13T00:51:54.542164604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 13 00:51:57.568808 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:57.571295 systemd[1]: Starting kubelet.service...
Sep 13 00:51:57.612435 systemd[1]: Reloading.
Sep 13 00:51:57.721325 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2025-09-13T00:51:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:51:57.721365 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2025-09-13T00:51:57Z" level=info msg="torcx already run"
Sep 13 00:51:57.809651 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:51:57.809673 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:51:57.824427 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:51:57.910842 systemd[1]: Started kubelet.service.
Sep 13 00:51:57.912907 systemd[1]: Stopping kubelet.service...
Sep 13 00:51:57.913577 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:51:57.913766 systemd[1]: Stopped kubelet.service.
Sep 13 00:51:57.915304 systemd[1]: Starting kubelet.service...
Sep 13 00:51:58.332243 systemd[1]: Started kubelet.service.
Sep 13 00:51:58.366964 kubelet[2045]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:51:58.366964 kubelet[2045]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:51:58.366964 kubelet[2045]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:51:58.367444 kubelet[2045]: I0913 00:51:58.367019 2045 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:51:59.070201 kubelet[2045]: I0913 00:51:59.070160 2045 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:51:59.070201 kubelet[2045]: I0913 00:51:59.070187 2045 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:51:59.070460 kubelet[2045]: I0913 00:51:59.070443 2045 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:51:59.319254 kubelet[2045]: I0913 00:51:59.318969 2045 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:51:59.335157 kubelet[2045]: E0913 00:51:59.334850 2045 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 00:51:59.361952 kubelet[2045]: E0913 00:51:59.361921 2045 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:51:59.362133 kubelet[2045]: I0913 00:51:59.362122 2045 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:51:59.366140 kubelet[2045]: I0913 00:51:59.366118 2045 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:51:59.366359 kubelet[2045]: I0913 00:51:59.366332 2045 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:51:59.366519 kubelet[2045]: I0913 00:51:59.366356 2045 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-2e01e92296","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:51:59.366675 kubelet[2045]: I0913 00:51:59.366526 2045 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:51:59.366675 kubelet[2045]: I0913 00:51:59.366538 2045 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:51:59.367459 kubelet[2045]: I0913 00:51:59.367441 2045 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:51:59.372007 kubelet[2045]: I0913 00:51:59.371972 2045 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:51:59.372092 kubelet[2045]: I0913 00:51:59.372012 2045 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:51:59.372092 kubelet[2045]: I0913 00:51:59.372049 2045 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:51:59.372092 kubelet[2045]: I0913 00:51:59.372066 2045 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:51:59.397982 kubelet[2045]: E0913 00:51:59.397176 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-2e01e92296&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:51:59.397982 kubelet[2045]: I0913 00:51:59.397588 2045 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:51:59.398301 kubelet[2045]: I0913 00:51:59.398281 2045 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:51:59.400287 kubelet[2045]: W0913 00:51:59.400267 2045 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:51:59.403150 kubelet[2045]: I0913 00:51:59.403131 2045 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:51:59.403226 kubelet[2045]: I0913 00:51:59.403183 2045 server.go:1289] "Started kubelet"
Sep 13 00:51:59.411019 kubelet[2045]: E0913 00:51:59.410988 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:51:59.417103 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 00:51:59.418073 kubelet[2045]: E0913 00:51:59.413187 2045 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-2e01e92296.1864b151c8f2c1c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-2e01e92296,UID:ci-3510.3.8-n-2e01e92296,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-2e01e92296,},FirstTimestamp:2025-09-13 00:51:59.403155907 +0000 UTC m=+1.065761897,LastTimestamp:2025-09-13 00:51:59.403155907 +0000 UTC m=+1.065761897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-2e01e92296,}"
Sep 13 00:51:59.418647 kubelet[2045]: I0913 00:51:59.418632 2045 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:51:59.420309 kubelet[2045]: I0913 00:51:59.420291 2045 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:51:59.420769 kubelet[2045]: I0913 00:51:59.420014 2045 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:51:59.421863 kubelet[2045]: I0913 00:51:59.421842 2045 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:51:59.422581 kubelet[2045]: I0913 00:51:59.422565 2045 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:51:59.422892 kubelet[2045]: E0913 00:51:59.422876 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found"
Sep 13 00:51:59.422980 kubelet[2045]: I0913 00:51:59.422867 2045 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:51:59.423244 kubelet[2045]: I0913 00:51:59.423230 2045 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:51:59.424374 kubelet[2045]: I0913 00:51:59.424151 2045 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:51:59.424374 kubelet[2045]: I0913 00:51:59.424221 2045 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:51:59.425713 kubelet[2045]: I0913 00:51:59.425698 2045 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:51:59.425871 kubelet[2045]: I0913 00:51:59.425859 2045 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:51:59.426353 kubelet[2045]: E0913 00:51:59.426331 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:51:59.426533 kubelet[2045]: E0913 00:51:59.426503 2045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-2e01e92296?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="200ms"
Sep 13 00:51:59.426719 kubelet[2045]: I0913 00:51:59.426706 2045 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:51:59.449935 kubelet[2045]: E0913 00:51:59.449910 2045 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:51:59.475460 kubelet[2045]: I0913 00:51:59.475422 2045 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:51:59.476867 kubelet[2045]: I0913 00:51:59.476843 2045 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:51:59.476867 kubelet[2045]: I0913 00:51:59.476868 2045 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:51:59.476992 kubelet[2045]: I0913 00:51:59.476890 2045 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:51:59.476992 kubelet[2045]: I0913 00:51:59.476898 2045 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:51:59.476992 kubelet[2045]: E0913 00:51:59.476941 2045 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:51:59.477722 kubelet[2045]: E0913 00:51:59.477689 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:51:59.508041 kubelet[2045]: I0913 00:51:59.508011 2045 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:51:59.508041 kubelet[2045]: I0913 00:51:59.508027 2045 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:51:59.508188 kubelet[2045]: I0913 00:51:59.508056 2045 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:51:59.517673 kubelet[2045]: I0913 00:51:59.517652 2045 policy_none.go:49] "None policy: Start"
Sep 13 00:51:59.517673 kubelet[2045]: I0913 00:51:59.517674 2045 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:51:59.517793 kubelet[2045]: I0913 00:51:59.517688 2045 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:51:59.523128 kubelet[2045]: E0913 00:51:59.523003 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found"
Sep 13 00:51:59.525018 systemd[1]: Created slice kubepods.slice.
Sep 13 00:51:59.529847 systemd[1]: Created slice kubepods-burstable.slice.
Sep 13 00:51:59.532504 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 13 00:51:59.537728 kubelet[2045]: E0913 00:51:59.537710 2045 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:51:59.539422 kubelet[2045]: I0913 00:51:59.539410 2045 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:51:59.539653 kubelet[2045]: I0913 00:51:59.539518 2045 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:51:59.540756 kubelet[2045]: E0913 00:51:59.540527 2045 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:51:59.540841 kubelet[2045]: E0913 00:51:59.540777 2045 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-2e01e92296\" not found"
Sep 13 00:51:59.540975 kubelet[2045]: I0913 00:51:59.540954 2045 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:51:59.588903 systemd[1]: Created slice kubepods-burstable-pod26d33cc1ae63ab54972857fdb58ddb2e.slice.
Sep 13 00:51:59.593717 kubelet[2045]: E0913 00:51:59.593682 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.596975 systemd[1]: Created slice kubepods-burstable-poddce761224c0cc7506b3fbf3d1bffd98a.slice.
Sep 13 00:51:59.599727 kubelet[2045]: E0913 00:51:59.599710 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.606778 systemd[1]: Created slice kubepods-burstable-pod4ca11cde3b4815f140fe43ea1b398810.slice.
Sep 13 00:51:59.608424 kubelet[2045]: E0913 00:51:59.608403 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627037 kubelet[2045]: I0913 00:51:59.626994 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627145 kubelet[2045]: I0913 00:51:59.627057 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627145 kubelet[2045]: I0913 00:51:59.627086 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627145 kubelet[2045]: I0913 00:51:59.627121 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ca11cde3b4815f140fe43ea1b398810-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-2e01e92296\" (UID: \"4ca11cde3b4815f140fe43ea1b398810\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627257 kubelet[2045]: I0913 00:51:59.627144 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26d33cc1ae63ab54972857fdb58ddb2e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" (UID: \"26d33cc1ae63ab54972857fdb58ddb2e\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627257 kubelet[2045]: I0913 00:51:59.627167 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627257 kubelet[2045]: I0913 00:51:59.627216 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627257 kubelet[2045]: I0913 00:51:59.627242 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26d33cc1ae63ab54972857fdb58ddb2e-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" (UID: \"26d33cc1ae63ab54972857fdb58ddb2e\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627404 kubelet[2045]: I0913 00:51:59.627281 2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26d33cc1ae63ab54972857fdb58ddb2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" (UID: \"26d33cc1ae63ab54972857fdb58ddb2e\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.627628 kubelet[2045]: E0913 00:51:59.627604 2045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-2e01e92296?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="400ms"
Sep 13 00:51:59.643063 kubelet[2045]: I0913 00:51:59.643044 2045 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.643394 kubelet[2045]: E0913 00:51:59.643360 2045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.845909 kubelet[2045]: I0913 00:51:59.845810 2045 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.846669 kubelet[2045]: E0913 00:51:59.846644 2045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:51:59.895341 env[1445]: time="2025-09-13T00:51:59.895299745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-2e01e92296,Uid:26d33cc1ae63ab54972857fdb58ddb2e,Namespace:kube-system,Attempt:0,}"
Sep 13 00:51:59.901010 env[1445]: time="2025-09-13T00:51:59.900976214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-2e01e92296,Uid:dce761224c0cc7506b3fbf3d1bffd98a,Namespace:kube-system,Attempt:0,}"
Sep 13 00:51:59.909555 env[1445]: time="2025-09-13T00:51:59.909522568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-2e01e92296,Uid:4ca11cde3b4815f140fe43ea1b398810,Namespace:kube-system,Attempt:0,}"
Sep 13 00:52:00.028353 kubelet[2045]: E0913 00:52:00.028292 2045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-2e01e92296?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="800ms"
Sep 13 00:52:00.248887 kubelet[2045]: I0913 00:52:00.248594 2045 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:52:00.249144 kubelet[2045]: E0913 00:52:00.249097 2045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.8-n-2e01e92296"
Sep 13 00:52:01.274801 kubelet[2045]: E0913 00:52:00.537478 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:52:01.274801 kubelet[2045]: E0913 00:52:00.661972 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-2e01e92296&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:52:01.274801 kubelet[2045]: E0913 00:52:00.724600 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:52:01.274801 kubelet[2045]: E0913 00:52:00.829977 2045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-2e01e92296?timeout=10s\": dial tcp 10.200.4.42:6443: connect: connection refused" interval="1.6s" Sep 13 00:52:01.274801 kubelet[2045]: E0913 00:52:00.991246 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:52:01.274801 kubelet[2045]: I0913 00:52:01.051251 2045 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:01.275344 kubelet[2045]: E0913 00:52:01.055646 2045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:01.390228 kubelet[2045]: E0913 00:52:01.390183 2045 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:52:02.431361 kubelet[2045]: E0913 00:52:02.431316 2045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-2e01e92296?timeout=10s\": dial tcp 10.200.4.42:6443: connect: 
connection refused" interval="3.2s" Sep 13 00:52:02.535348 kubelet[2045]: E0913 00:52:02.535307 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:52:02.658199 kubelet[2045]: I0913 00:52:02.658165 2045 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:02.658493 kubelet[2045]: E0913 00:52:02.658465 2045 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.42:6443/api/v1/nodes\": dial tcp 10.200.4.42:6443: connect: connection refused" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:02.659855 kubelet[2045]: E0913 00:52:02.659828 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:52:02.827386 kubelet[2045]: E0913 00:52:02.827326 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:52:03.772566 kubelet[2045]: E0913 00:52:03.772524 2045 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-2e01e92296&limit=500&resourceVersion=0\": dial tcp 10.200.4.42:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:52:04.336335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768437359.mount: Deactivated successfully. Sep 13 00:52:04.361514 env[1445]: time="2025-09-13T00:52:04.361468418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.363969 env[1445]: time="2025-09-13T00:52:04.363939917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.370706 env[1445]: time="2025-09-13T00:52:04.370675015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.374205 env[1445]: time="2025-09-13T00:52:04.374177714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.377281 env[1445]: time="2025-09-13T00:52:04.377251114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.380429 env[1445]: time="2025-09-13T00:52:04.380395013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.383502 env[1445]: time="2025-09-13T00:52:04.383469612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.386555 env[1445]: time="2025-09-13T00:52:04.386525811Z" 
level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.391344 env[1445]: time="2025-09-13T00:52:04.391314410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.393618 env[1445]: time="2025-09-13T00:52:04.393589709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.401733 env[1445]: time="2025-09-13T00:52:04.401703007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.410553 env[1445]: time="2025-09-13T00:52:04.410516705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:04.455157 env[1445]: time="2025-09-13T00:52:04.455092694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:04.455157 env[1445]: time="2025-09-13T00:52:04.455129994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:04.455366 env[1445]: time="2025-09-13T00:52:04.455145894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:04.455439 env[1445]: time="2025-09-13T00:52:04.455390194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e644c4bd3691bc4f5be1cfdb3ec57d1b466d7a37217c1d1c217b2afae6accc01 pid=2090 runtime=io.containerd.runc.v2 Sep 13 00:52:04.479642 systemd[1]: Started cri-containerd-e644c4bd3691bc4f5be1cfdb3ec57d1b466d7a37217c1d1c217b2afae6accc01.scope. Sep 13 00:52:04.491093 env[1445]: time="2025-09-13T00:52:04.491022585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:04.491261 env[1445]: time="2025-09-13T00:52:04.491236085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:04.491376 env[1445]: time="2025-09-13T00:52:04.491352985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:04.492135 env[1445]: time="2025-09-13T00:52:04.492077885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2553a5f9049b907cc019e4fa99ef3c71e67cd2cb2045881390fa4a73bd96810 pid=2116 runtime=io.containerd.runc.v2 Sep 13 00:52:04.500676 env[1445]: time="2025-09-13T00:52:04.500625782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:04.500818 env[1445]: time="2025-09-13T00:52:04.500794982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:04.500925 env[1445]: time="2025-09-13T00:52:04.500904582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:04.501276 env[1445]: time="2025-09-13T00:52:04.501241582Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f35d5f011fe7873ae083efc3f31b55611a1a35845ab458a11ceeec5b163010a pid=2137 runtime=io.containerd.runc.v2 Sep 13 00:52:04.518382 systemd[1]: Started cri-containerd-f2553a5f9049b907cc019e4fa99ef3c71e67cd2cb2045881390fa4a73bd96810.scope. Sep 13 00:52:04.531635 systemd[1]: Started cri-containerd-3f35d5f011fe7873ae083efc3f31b55611a1a35845ab458a11ceeec5b163010a.scope. Sep 13 00:52:04.582836 env[1445]: time="2025-09-13T00:52:04.582787162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-2e01e92296,Uid:26d33cc1ae63ab54972857fdb58ddb2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e644c4bd3691bc4f5be1cfdb3ec57d1b466d7a37217c1d1c217b2afae6accc01\"" Sep 13 00:52:04.590811 env[1445]: time="2025-09-13T00:52:04.590709760Z" level=info msg="CreateContainer within sandbox \"e644c4bd3691bc4f5be1cfdb3ec57d1b466d7a37217c1d1c217b2afae6accc01\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:52:04.595931 env[1445]: time="2025-09-13T00:52:04.595892358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-2e01e92296,Uid:dce761224c0cc7506b3fbf3d1bffd98a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2553a5f9049b907cc019e4fa99ef3c71e67cd2cb2045881390fa4a73bd96810\"" Sep 13 00:52:04.603272 env[1445]: time="2025-09-13T00:52:04.603233056Z" level=info msg="CreateContainer within sandbox \"f2553a5f9049b907cc019e4fa99ef3c71e67cd2cb2045881390fa4a73bd96810\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:52:04.614202 env[1445]: time="2025-09-13T00:52:04.614025154Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-2e01e92296,Uid:4ca11cde3b4815f140fe43ea1b398810,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f35d5f011fe7873ae083efc3f31b55611a1a35845ab458a11ceeec5b163010a\"" Sep 13 00:52:04.621432 env[1445]: time="2025-09-13T00:52:04.621395652Z" level=info msg="CreateContainer within sandbox \"3f35d5f011fe7873ae083efc3f31b55611a1a35845ab458a11ceeec5b163010a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:52:04.651978 env[1445]: time="2025-09-13T00:52:04.651934044Z" level=info msg="CreateContainer within sandbox \"f2553a5f9049b907cc019e4fa99ef3c71e67cd2cb2045881390fa4a73bd96810\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3323cba4cee0c177b21c3c8069d629329b89affd089d4b886fa44cc90ada3f4a\"" Sep 13 00:52:04.652690 env[1445]: time="2025-09-13T00:52:04.652661944Z" level=info msg="StartContainer for \"3323cba4cee0c177b21c3c8069d629329b89affd089d4b886fa44cc90ada3f4a\"" Sep 13 00:52:04.654950 env[1445]: time="2025-09-13T00:52:04.654917543Z" level=info msg="CreateContainer within sandbox \"e644c4bd3691bc4f5be1cfdb3ec57d1b466d7a37217c1d1c217b2afae6accc01\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c575433c4cd1287666dfd991bf8fd554372603e6e54982edd758f184e40016fe\"" Sep 13 00:52:04.655548 env[1445]: time="2025-09-13T00:52:04.655521943Z" level=info msg="StartContainer for \"c575433c4cd1287666dfd991bf8fd554372603e6e54982edd758f184e40016fe\"" Sep 13 00:52:04.676328 env[1445]: time="2025-09-13T00:52:04.675935538Z" level=info msg="CreateContainer within sandbox \"3f35d5f011fe7873ae083efc3f31b55611a1a35845ab458a11ceeec5b163010a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7393b38038217f6f8d33d0fc95115af1ea063d20aab7acef54a93d594886c5e9\"" Sep 13 00:52:04.676326 systemd[1]: Started cri-containerd-c575433c4cd1287666dfd991bf8fd554372603e6e54982edd758f184e40016fe.scope. 
Sep 13 00:52:04.683777 env[1445]: time="2025-09-13T00:52:04.683744836Z" level=info msg="StartContainer for \"7393b38038217f6f8d33d0fc95115af1ea063d20aab7acef54a93d594886c5e9\"" Sep 13 00:52:04.692623 systemd[1]: Started cri-containerd-3323cba4cee0c177b21c3c8069d629329b89affd089d4b886fa44cc90ada3f4a.scope. Sep 13 00:52:04.716371 systemd[1]: Started cri-containerd-7393b38038217f6f8d33d0fc95115af1ea063d20aab7acef54a93d594886c5e9.scope. Sep 13 00:52:04.781268 env[1445]: time="2025-09-13T00:52:04.781221311Z" level=info msg="StartContainer for \"7393b38038217f6f8d33d0fc95115af1ea063d20aab7acef54a93d594886c5e9\" returns successfully" Sep 13 00:52:04.801503 env[1445]: time="2025-09-13T00:52:04.801358706Z" level=info msg="StartContainer for \"3323cba4cee0c177b21c3c8069d629329b89affd089d4b886fa44cc90ada3f4a\" returns successfully" Sep 13 00:52:04.821317 env[1445]: time="2025-09-13T00:52:04.821262301Z" level=info msg="StartContainer for \"c575433c4cd1287666dfd991bf8fd554372603e6e54982edd758f184e40016fe\" returns successfully" Sep 13 00:52:05.492321 kubelet[2045]: E0913 00:52:05.492296 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:05.500885 kubelet[2045]: E0913 00:52:05.500650 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:05.505992 kubelet[2045]: E0913 00:52:05.505838 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:05.860205 kubelet[2045]: I0913 00:52:05.860177 2045 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:06.508615 kubelet[2045]: E0913 00:52:06.508580 2045 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:06.509245 kubelet[2045]: E0913 00:52:06.509226 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:06.509786 kubelet[2045]: E0913 00:52:06.509764 2045 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:08.175853 kubelet[2045]: E0913 00:52:08.175814 2045 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-2e01e92296\" not found" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:08.218984 kubelet[2045]: I0913 00:52:08.218950 2045 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:08.218984 kubelet[2045]: E0913 00:52:08.218991 2045 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-2e01e92296\": node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.235607 kubelet[2045]: E0913 00:52:08.235573 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.261232 kubelet[2045]: E0913 00:52:08.261140 2045 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-2e01e92296.1864b151c8f2c1c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-2e01e92296,UID:ci-3510.3.8-n-2e01e92296,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-2e01e92296,},FirstTimestamp:2025-09-13 00:51:59.403155907 +0000 UTC m=+1.065761897,LastTimestamp:2025-09-13 00:51:59.403155907 +0000 UTC m=+1.065761897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-2e01e92296,}" Sep 13 00:52:08.336603 kubelet[2045]: E0913 00:52:08.336565 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.437650 kubelet[2045]: E0913 00:52:08.437530 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.537783 kubelet[2045]: E0913 00:52:08.537732 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.638523 kubelet[2045]: E0913 00:52:08.638487 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.739113 kubelet[2045]: E0913 00:52:08.739005 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.839750 kubelet[2045]: E0913 00:52:08.839700 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:08.940501 kubelet[2045]: E0913 00:52:08.940459 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.041122 kubelet[2045]: E0913 00:52:09.041015 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.141764 kubelet[2045]: E0913 00:52:09.141724 2045 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.242241 kubelet[2045]: E0913 00:52:09.242209 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.343040 kubelet[2045]: E0913 00:52:09.342997 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.443466 kubelet[2045]: E0913 00:52:09.443430 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.541496 kubelet[2045]: E0913 00:52:09.541455 2045 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.544530 kubelet[2045]: E0913 00:52:09.544499 2045 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-2e01e92296\" not found" Sep 13 00:52:09.625592 kubelet[2045]: I0913 00:52:09.625482 2045 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:09.637539 kubelet[2045]: I0913 00:52:09.637498 2045 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:52:09.637717 kubelet[2045]: I0913 00:52:09.637696 2045 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:09.644049 kubelet[2045]: I0913 00:52:09.644011 2045 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:52:09.644191 kubelet[2045]: I0913 00:52:09.644144 2045 kubelet.go:3309] "Creating a mirror pod for static 
pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:09.649531 kubelet[2045]: I0913 00:52:09.649498 2045 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:52:10.288916 systemd[1]: Reloading. Sep 13 00:52:10.375103 /usr/lib/systemd/system-generators/torcx-generator[2352]: time="2025-09-13T00:52:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:52:10.375531 /usr/lib/systemd/system-generators/torcx-generator[2352]: time="2025-09-13T00:52:10Z" level=info msg="torcx already run" Sep 13 00:52:10.407360 kubelet[2045]: I0913 00:52:10.407326 2045 apiserver.go:52] "Watching apiserver" Sep 13 00:52:10.426407 kubelet[2045]: I0913 00:52:10.426339 2045 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:52:10.460296 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:52:10.460315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:52:10.476409 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:52:10.576973 systemd[1]: Stopping kubelet.service... Sep 13 00:52:10.599435 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:52:10.599645 systemd[1]: Stopped kubelet.service. Sep 13 00:52:10.599706 systemd[1]: kubelet.service: Consumed 1.223s CPU time. 
Sep 13 00:52:10.601496 systemd[1]: Starting kubelet.service... Sep 13 00:52:10.693877 systemd[1]: Started kubelet.service. Sep 13 00:52:10.741495 kubelet[2419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:52:10.741495 kubelet[2419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:52:10.741495 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:52:10.741952 kubelet[2419]: I0913 00:52:10.741549 2419 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:52:10.746756 kubelet[2419]: I0913 00:52:10.746722 2419 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:52:10.746756 kubelet[2419]: I0913 00:52:10.746746 2419 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:52:10.747008 kubelet[2419]: I0913 00:52:10.746989 2419 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:52:10.747956 kubelet[2419]: I0913 00:52:10.747927 2419 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:52:10.749687 kubelet[2419]: I0913 00:52:10.749668 2419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:52:10.755860 kubelet[2419]: E0913 00:52:10.755841 2419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = 
unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:52:10.755984 kubelet[2419]: I0913 00:52:10.755976 2419 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:52:10.759063 kubelet[2419]: I0913 00:52:10.759025 2419 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:52:10.759268 kubelet[2419]: I0913 00:52:10.759240 2419 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:52:10.759419 kubelet[2419]: I0913 00:52:10.759267 2419 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-2e01e92296","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"no
ne","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:52:10.759556 kubelet[2419]: I0913 00:52:10.759425 2419 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:52:10.759556 kubelet[2419]: I0913 00:52:10.759437 2419 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:52:10.759556 kubelet[2419]: I0913 00:52:10.759486 2419 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:52:10.759675 kubelet[2419]: I0913 00:52:10.759642 2419 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:52:10.759675 kubelet[2419]: I0913 00:52:10.759656 2419 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:52:10.764070 kubelet[2419]: I0913 00:52:10.760026 2419 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:52:10.764194 kubelet[2419]: I0913 00:52:10.764184 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:52:10.765565 kubelet[2419]: I0913 00:52:10.765533 2419 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:52:10.768901 kubelet[2419]: I0913 00:52:10.766221 2419 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:52:10.770297 kubelet[2419]: I0913 00:52:10.770284 2419 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:52:10.770445 kubelet[2419]: I0913 00:52:10.770437 2419 server.go:1289] "Started kubelet" Sep 13 00:52:10.777807 kubelet[2419]: I0913 00:52:10.776379 2419 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Sep 13 00:52:10.777807 kubelet[2419]: I0913 00:52:10.776700 2419 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:52:10.777807 kubelet[2419]: I0913 00:52:10.776755 2419 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:52:10.777807 kubelet[2419]: I0913 00:52:10.777799 2419 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:52:10.780358 kubelet[2419]: I0913 00:52:10.780294 2419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:52:10.782241 kubelet[2419]: I0913 00:52:10.782066 2419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:52:10.785497 kubelet[2419]: I0913 00:52:10.784672 2419 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:52:10.785497 kubelet[2419]: I0913 00:52:10.784789 2419 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:52:10.785497 kubelet[2419]: I0913 00:52:10.784903 2419 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:52:10.787651 kubelet[2419]: I0913 00:52:10.787272 2419 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:52:10.787651 kubelet[2419]: I0913 00:52:10.787375 2419 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:52:10.787954 kubelet[2419]: E0913 00:52:10.787926 2419 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:52:10.790141 kubelet[2419]: I0913 00:52:10.790121 2419 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:52:10.800714 kubelet[2419]: I0913 00:52:10.800680 2419 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:52:10.802529 kubelet[2419]: I0913 00:52:10.802515 2419 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:52:10.802615 kubelet[2419]: I0913 00:52:10.802608 2419 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:52:10.802673 kubelet[2419]: I0913 00:52:10.802667 2419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:52:10.802710 kubelet[2419]: I0913 00:52:10.802705 2419 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:52:10.802778 kubelet[2419]: E0913 00:52:10.802767 2419 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:52:10.840905 kubelet[2419]: I0913 00:52:10.840771 2419 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:52:10.840905 kubelet[2419]: I0913 00:52:10.840787 2419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:52:10.840905 kubelet[2419]: I0913 00:52:10.840808 2419 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:52:10.842150 kubelet[2419]: I0913 00:52:10.842126 2419 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:52:10.842150 kubelet[2419]: I0913 00:52:10.842142 2419 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:52:10.842286 kubelet[2419]: I0913 00:52:10.842161 2419 policy_none.go:49] "None policy: Start" Sep 13 00:52:10.842286 kubelet[2419]: I0913 00:52:10.842173 2419 memory_manager.go:186] "Starting 
memorymanager" policy="None" Sep 13 00:52:10.842286 kubelet[2419]: I0913 00:52:10.842184 2419 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:52:10.842399 kubelet[2419]: I0913 00:52:10.842322 2419 state_mem.go:75] "Updated machine memory state" Sep 13 00:52:10.845607 kubelet[2419]: E0913 00:52:10.845585 2419 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:52:10.845772 kubelet[2419]: I0913 00:52:10.845756 2419 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:52:10.845830 kubelet[2419]: I0913 00:52:10.845770 2419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:52:10.846193 kubelet[2419]: I0913 00:52:10.846178 2419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:52:10.848545 kubelet[2419]: E0913 00:52:10.847702 2419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:52:10.904220 kubelet[2419]: I0913 00:52:10.904180 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.904481 kubelet[2419]: I0913 00:52:10.904449 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.904697 kubelet[2419]: I0913 00:52:10.904180 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.914737 kubelet[2419]: I0913 00:52:10.914712 2419 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:52:10.915026 kubelet[2419]: E0913 00:52:10.914991 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.915351 kubelet[2419]: I0913 00:52:10.915332 2419 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:52:10.915445 kubelet[2419]: E0913 00:52:10.915377 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.915445 kubelet[2419]: I0913 00:52:10.915337 2419 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:52:10.915445 kubelet[2419]: E0913 00:52:10.915439 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-2e01e92296\" already exists" 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.955049 kubelet[2419]: I0913 00:52:10.955015 2419 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.965318 kubelet[2419]: I0913 00:52:10.965287 2419 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.965452 kubelet[2419]: I0913 00:52:10.965369 2419 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985609 kubelet[2419]: I0913 00:52:10.985579 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26d33cc1ae63ab54972857fdb58ddb2e-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" (UID: \"26d33cc1ae63ab54972857fdb58ddb2e\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985758 kubelet[2419]: I0913 00:52:10.985613 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985758 kubelet[2419]: I0913 00:52:10.985641 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985758 kubelet[2419]: I0913 00:52:10.985661 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985758 kubelet[2419]: I0913 00:52:10.985683 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985758 kubelet[2419]: I0913 00:52:10.985705 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dce761224c0cc7506b3fbf3d1bffd98a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-2e01e92296\" (UID: \"dce761224c0cc7506b3fbf3d1bffd98a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985930 kubelet[2419]: I0913 00:52:10.985730 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26d33cc1ae63ab54972857fdb58ddb2e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" (UID: \"26d33cc1ae63ab54972857fdb58ddb2e\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:10.985930 kubelet[2419]: I0913 00:52:10.985753 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26d33cc1ae63ab54972857fdb58ddb2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" (UID: \"26d33cc1ae63ab54972857fdb58ddb2e\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 
00:52:10.985930 kubelet[2419]: I0913 00:52:10.985778 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ca11cde3b4815f140fe43ea1b398810-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-2e01e92296\" (UID: \"4ca11cde3b4815f140fe43ea1b398810\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:11.467716 sudo[2455]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:52:11.468056 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:52:11.765762 kubelet[2419]: I0913 00:52:11.765667 2419 apiserver.go:52] "Watching apiserver" Sep 13 00:52:11.785365 kubelet[2419]: I0913 00:52:11.785329 2419 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:52:11.824893 kubelet[2419]: I0913 00:52:11.824860 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:11.825792 kubelet[2419]: I0913 00:52:11.825773 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:11.835071 kubelet[2419]: I0913 00:52:11.835050 2419 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:52:11.835270 kubelet[2419]: E0913 00:52:11.835255 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-2e01e92296\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:11.836074 kubelet[2419]: I0913 00:52:11.836058 2419 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 
00:52:11.836223 kubelet[2419]: E0913 00:52:11.836211 2419 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-2e01e92296\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" Sep 13 00:52:11.864042 kubelet[2419]: I0913 00:52:11.863971 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-2e01e92296" podStartSLOduration=2.863952302 podStartE2EDuration="2.863952302s" podCreationTimestamp="2025-09-13 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:11.845713594 +0000 UTC m=+1.145013771" watchObservedRunningTime="2025-09-13 00:52:11.863952302 +0000 UTC m=+1.163252479" Sep 13 00:52:11.873986 kubelet[2419]: I0913 00:52:11.873927 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-2e01e92296" podStartSLOduration=2.87390888 podStartE2EDuration="2.87390888s" podCreationTimestamp="2025-09-13 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:11.864299312 +0000 UTC m=+1.163599589" watchObservedRunningTime="2025-09-13 00:52:11.87390888 +0000 UTC m=+1.173209157" Sep 13 00:52:11.874197 kubelet[2419]: I0913 00:52:11.874056 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-2e01e92296" podStartSLOduration=2.874049384 podStartE2EDuration="2.874049384s" podCreationTimestamp="2025-09-13 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:11.87353117 +0000 UTC m=+1.172831447" watchObservedRunningTime="2025-09-13 00:52:11.874049384 +0000 UTC m=+1.173349561" Sep 13 00:52:12.052685 sudo[2455]: 
pam_unix(sudo:session): session closed for user root Sep 13 00:52:13.760851 sudo[1720]: pam_unix(sudo:session): session closed for user root Sep 13 00:52:13.862632 sshd[1717]: pam_unix(sshd:session): session closed for user core Sep 13 00:52:13.865681 systemd[1]: sshd@4-10.200.4.42:22-10.200.16.10:43150.service: Deactivated successfully. Sep 13 00:52:13.866511 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:52:13.866690 systemd[1]: session-7.scope: Consumed 4.699s CPU time. Sep 13 00:52:13.867215 systemd-logind[1431]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:52:13.868016 systemd-logind[1431]: Removed session 7. Sep 13 00:52:17.126307 kubelet[2419]: I0913 00:52:17.126278 2419 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:52:17.126998 env[1445]: time="2025-09-13T00:52:17.126959464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:52:17.127381 kubelet[2419]: I0913 00:52:17.127360 2419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:52:18.136568 systemd[1]: Created slice kubepods-besteffort-pod04fe0a45_21e3_4807_9067_371ff78cb787.slice. Sep 13 00:52:18.148496 systemd[1]: Created slice kubepods-burstable-pode9350ce8_2347_4fe8_9a54_2d10a54f4348.slice. 
Sep 13 00:52:18.228541 kubelet[2419]: I0913 00:52:18.228502 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl85t\" (UniqueName: \"kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-kube-api-access-vl85t\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229092 kubelet[2419]: I0913 00:52:18.229064 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hostproc\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229187 kubelet[2419]: I0913 00:52:18.229100 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-config-path\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229187 kubelet[2419]: I0913 00:52:18.229122 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-net\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229187 kubelet[2419]: I0913 00:52:18.229144 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-kernel\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229187 kubelet[2419]: I0913 00:52:18.229168 2419 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04fe0a45-21e3-4807-9067-371ff78cb787-xtables-lock\") pod \"kube-proxy-zxwk9\" (UID: \"04fe0a45-21e3-4807-9067-371ff78cb787\") " pod="kube-system/kube-proxy-zxwk9" Sep 13 00:52:18.229343 kubelet[2419]: I0913 00:52:18.229189 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04fe0a45-21e3-4807-9067-371ff78cb787-lib-modules\") pod \"kube-proxy-zxwk9\" (UID: \"04fe0a45-21e3-4807-9067-371ff78cb787\") " pod="kube-system/kube-proxy-zxwk9" Sep 13 00:52:18.229343 kubelet[2419]: I0913 00:52:18.229209 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-run\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229343 kubelet[2419]: I0913 00:52:18.229231 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cni-path\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229343 kubelet[2419]: I0913 00:52:18.229252 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-etc-cni-netd\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229343 kubelet[2419]: I0913 00:52:18.229283 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/04fe0a45-21e3-4807-9067-371ff78cb787-kube-proxy\") pod \"kube-proxy-zxwk9\" (UID: \"04fe0a45-21e3-4807-9067-371ff78cb787\") " pod="kube-system/kube-proxy-zxwk9" Sep 13 00:52:18.229343 kubelet[2419]: I0913 00:52:18.229306 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fmll\" (UniqueName: \"kubernetes.io/projected/04fe0a45-21e3-4807-9067-371ff78cb787-kube-api-access-6fmll\") pod \"kube-proxy-zxwk9\" (UID: \"04fe0a45-21e3-4807-9067-371ff78cb787\") " pod="kube-system/kube-proxy-zxwk9" Sep 13 00:52:18.229553 kubelet[2419]: I0913 00:52:18.229330 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-bpf-maps\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229553 kubelet[2419]: I0913 00:52:18.229352 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-cgroup\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229553 kubelet[2419]: I0913 00:52:18.229373 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-lib-modules\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229553 kubelet[2419]: I0913 00:52:18.229397 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-xtables-lock\") pod \"cilium-nw25l\" (UID: 
\"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229553 kubelet[2419]: I0913 00:52:18.229418 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9350ce8-2347-4fe8-9a54-2d10a54f4348-clustermesh-secrets\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.229553 kubelet[2419]: I0913 00:52:18.229441 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hubble-tls\") pod \"cilium-nw25l\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " pod="kube-system/cilium-nw25l" Sep 13 00:52:18.334141 kubelet[2419]: I0913 00:52:18.334101 2419 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:52:18.394815 systemd[1]: Created slice kubepods-besteffort-pod8b9ee15b_6e16_481a_9c90_8dfb93741d9c.slice. 
Sep 13 00:52:18.431690 kubelet[2419]: I0913 00:52:18.431666 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtllp\" (UniqueName: \"kubernetes.io/projected/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-kube-api-access-jtllp\") pod \"cilium-operator-6c4d7847fc-rl67h\" (UID: \"8b9ee15b-6e16-481a-9c90-8dfb93741d9c\") " pod="kube-system/cilium-operator-6c4d7847fc-rl67h" Sep 13 00:52:18.431909 kubelet[2419]: I0913 00:52:18.431881 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rl67h\" (UID: \"8b9ee15b-6e16-481a-9c90-8dfb93741d9c\") " pod="kube-system/cilium-operator-6c4d7847fc-rl67h" Sep 13 00:52:18.445428 env[1445]: time="2025-09-13T00:52:18.445384017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxwk9,Uid:04fe0a45-21e3-4807-9067-371ff78cb787,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:18.453219 env[1445]: time="2025-09-13T00:52:18.453176996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nw25l,Uid:e9350ce8-2347-4fe8-9a54-2d10a54f4348,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:18.493940 env[1445]: time="2025-09-13T00:52:18.493869635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:18.493940 env[1445]: time="2025-09-13T00:52:18.493906736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:18.493940 env[1445]: time="2025-09-13T00:52:18.493921636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:18.495104 env[1445]: time="2025-09-13T00:52:18.494341846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/236d507a05f0ef6e069e8e58ce1b03c6d6bb6056162fcb46bf6812bfced197e2 pid=2505 runtime=io.containerd.runc.v2 Sep 13 00:52:18.502563 env[1445]: time="2025-09-13T00:52:18.502500934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:18.502563 env[1445]: time="2025-09-13T00:52:18.502536435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:18.502724 env[1445]: time="2025-09-13T00:52:18.502551535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:18.503224 env[1445]: time="2025-09-13T00:52:18.502893443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d pid=2524 runtime=io.containerd.runc.v2 Sep 13 00:52:18.515410 systemd[1]: Started cri-containerd-236d507a05f0ef6e069e8e58ce1b03c6d6bb6056162fcb46bf6812bfced197e2.scope. Sep 13 00:52:18.525816 systemd[1]: Started cri-containerd-32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d.scope. 
Sep 13 00:52:18.564685 env[1445]: time="2025-09-13T00:52:18.564640067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxwk9,Uid:04fe0a45-21e3-4807-9067-371ff78cb787,Namespace:kube-system,Attempt:0,} returns sandbox id \"236d507a05f0ef6e069e8e58ce1b03c6d6bb6056162fcb46bf6812bfced197e2\"" Sep 13 00:52:18.573585 env[1445]: time="2025-09-13T00:52:18.573544072Z" level=info msg="CreateContainer within sandbox \"236d507a05f0ef6e069e8e58ce1b03c6d6bb6056162fcb46bf6812bfced197e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:52:18.573728 env[1445]: time="2025-09-13T00:52:18.573708176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nw25l,Uid:e9350ce8-2347-4fe8-9a54-2d10a54f4348,Namespace:kube-system,Attempt:0,} returns sandbox id \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\"" Sep 13 00:52:18.575475 env[1445]: time="2025-09-13T00:52:18.575446216Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:52:18.609335 env[1445]: time="2025-09-13T00:52:18.609301797Z" level=info msg="CreateContainer within sandbox \"236d507a05f0ef6e069e8e58ce1b03c6d6bb6056162fcb46bf6812bfced197e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66dc0fe238895c4719d6e870cf3e9dd40786a83afae3ce21d62f2bac2664f1a5\"" Sep 13 00:52:18.610319 env[1445]: time="2025-09-13T00:52:18.610269519Z" level=info msg="StartContainer for \"66dc0fe238895c4719d6e870cf3e9dd40786a83afae3ce21d62f2bac2664f1a5\"" Sep 13 00:52:18.627590 systemd[1]: Started cri-containerd-66dc0fe238895c4719d6e870cf3e9dd40786a83afae3ce21d62f2bac2664f1a5.scope. 
Sep 13 00:52:18.663384 env[1445]: time="2025-09-13T00:52:18.663277142Z" level=info msg="StartContainer for \"66dc0fe238895c4719d6e870cf3e9dd40786a83afae3ce21d62f2bac2664f1a5\" returns successfully" Sep 13 00:52:18.699167 env[1445]: time="2025-09-13T00:52:18.699126268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rl67h,Uid:8b9ee15b-6e16-481a-9c90-8dfb93741d9c,Namespace:kube-system,Attempt:0,}" Sep 13 00:52:18.727539 env[1445]: time="2025-09-13T00:52:18.727370620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:52:18.727539 env[1445]: time="2025-09-13T00:52:18.727432421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:52:18.727539 env[1445]: time="2025-09-13T00:52:18.727462422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:52:18.727796 env[1445]: time="2025-09-13T00:52:18.727604125Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc pid=2629 runtime=io.containerd.runc.v2 Sep 13 00:52:18.746337 systemd[1]: Started cri-containerd-f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc.scope. 
Sep 13 00:52:18.800412 env[1445]: time="2025-09-13T00:52:18.798944270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rl67h,Uid:8b9ee15b-6e16-481a-9c90-8dfb93741d9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc\"" Sep 13 00:52:20.818739 kubelet[2419]: I0913 00:52:20.818296 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zxwk9" podStartSLOduration=2.818278259 podStartE2EDuration="2.818278259s" podCreationTimestamp="2025-09-13 00:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:18.848531914 +0000 UTC m=+8.147832091" watchObservedRunningTime="2025-09-13 00:52:20.818278259 +0000 UTC m=+10.117578436" Sep 13 00:52:25.787605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691415671.mount: Deactivated successfully. 
Sep 13 00:52:28.700414 env[1445]: time="2025-09-13T00:52:28.700361103Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:28.710655 env[1445]: time="2025-09-13T00:52:28.710596885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:28.714736 env[1445]: time="2025-09-13T00:52:28.714693758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:52:28.715295 env[1445]: time="2025-09-13T00:52:28.715256268Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:52:28.716721 env[1445]: time="2025-09-13T00:52:28.716693394Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:52:28.723858 env[1445]: time="2025-09-13T00:52:28.723825921Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:52:28.747356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487811995.mount: Deactivated successfully. 
Sep 13 00:52:28.763075 env[1445]: time="2025-09-13T00:52:28.763014518Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\"" Sep 13 00:52:28.763837 env[1445]: time="2025-09-13T00:52:28.763806032Z" level=info msg="StartContainer for \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\"" Sep 13 00:52:28.790911 systemd[1]: Started cri-containerd-bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3.scope. Sep 13 00:52:28.827500 systemd[1]: cri-containerd-bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3.scope: Deactivated successfully. Sep 13 00:52:28.921081 env[1445]: time="2025-09-13T00:52:28.920644021Z" level=info msg="StartContainer for \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\" returns successfully" Sep 13 00:52:29.744547 systemd[1]: run-containerd-runc-k8s.io-bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3-runc.0dYLzH.mount: Deactivated successfully. Sep 13 00:52:29.744646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3-rootfs.mount: Deactivated successfully. 
Sep 13 00:52:32.635631 env[1445]: time="2025-09-13T00:52:32.635585431Z" level=info msg="shim disconnected" id=bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3
Sep 13 00:52:32.635631 env[1445]: time="2025-09-13T00:52:32.635632932Z" level=warning msg="cleaning up after shim disconnected" id=bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3 namespace=k8s.io
Sep 13 00:52:32.636164 env[1445]: time="2025-09-13T00:52:32.635643732Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:32.643631 env[1445]: time="2025-09-13T00:52:32.643588560Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2839 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:32.942178 env[1445]: time="2025-09-13T00:52:32.942017365Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:52:32.972056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243277231.mount: Deactivated successfully.
Sep 13 00:52:32.990644 env[1445]: time="2025-09-13T00:52:32.990598147Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\""
Sep 13 00:52:32.991519 env[1445]: time="2025-09-13T00:52:32.991477862Z" level=info msg="StartContainer for \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\""
Sep 13 00:52:33.025506 systemd[1]: Started cri-containerd-3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051.scope.
Sep 13 00:52:33.072133 env[1445]: time="2025-09-13T00:52:33.072090232Z" level=info msg="StartContainer for \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\" returns successfully"
Sep 13 00:52:33.085426 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:52:33.086168 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:52:33.086651 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 00:52:33.091749 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:52:33.100779 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:52:33.103632 systemd[1]: cri-containerd-3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051.scope: Deactivated successfully.
Sep 13 00:52:33.274619 env[1445]: time="2025-09-13T00:52:33.274491012Z" level=info msg="shim disconnected" id=3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051
Sep 13 00:52:33.274619 env[1445]: time="2025-09-13T00:52:33.274538713Z" level=warning msg="cleaning up after shim disconnected" id=3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051 namespace=k8s.io
Sep 13 00:52:33.274619 env[1445]: time="2025-09-13T00:52:33.274549413Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:33.283276 env[1445]: time="2025-09-13T00:52:33.283236550Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2903 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:33.956501 env[1445]: time="2025-09-13T00:52:33.956442927Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:52:33.961679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051-rootfs.mount: Deactivated successfully.
Sep 13 00:52:34.001400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041143247.mount: Deactivated successfully.
Sep 13 00:52:34.015932 env[1445]: time="2025-09-13T00:52:34.015891456Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\""
Sep 13 00:52:34.017369 env[1445]: time="2025-09-13T00:52:34.016447064Z" level=info msg="StartContainer for \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\""
Sep 13 00:52:34.045931 systemd[1]: Started cri-containerd-dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05.scope.
Sep 13 00:52:34.064019 env[1445]: time="2025-09-13T00:52:34.063976793Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:52:34.073390 env[1445]: time="2025-09-13T00:52:34.073345937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:52:34.076848 env[1445]: time="2025-09-13T00:52:34.076805690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:52:34.077137 env[1445]: time="2025-09-13T00:52:34.077103895Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:52:34.083761 systemd[1]: cri-containerd-dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05.scope: Deactivated successfully.
Sep 13 00:52:34.087183 env[1445]: time="2025-09-13T00:52:34.085681926Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9350ce8_2347_4fe8_9a54_2d10a54f4348.slice/cri-containerd-dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05.scope/memory.events\": no such file or directory"
Sep 13 00:52:34.089734 env[1445]: time="2025-09-13T00:52:34.089703488Z" level=info msg="CreateContainer within sandbox \"f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:52:34.090510 env[1445]: time="2025-09-13T00:52:34.090474200Z" level=info msg="StartContainer for \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\" returns successfully"
Sep 13 00:52:34.566124 env[1445]: time="2025-09-13T00:52:34.566062893Z" level=info msg="shim disconnected" id=dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05
Sep 13 00:52:34.566124 env[1445]: time="2025-09-13T00:52:34.566115194Z" level=warning msg="cleaning up after shim disconnected" id=dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05 namespace=k8s.io
Sep 13 00:52:34.566124 env[1445]: time="2025-09-13T00:52:34.566128394Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:34.569985 env[1445]: time="2025-09-13T00:52:34.569938153Z" level=info msg="CreateContainer within sandbox \"f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\""
Sep 13 00:52:34.571181 env[1445]: time="2025-09-13T00:52:34.570531162Z" level=info msg="StartContainer for \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\""
Sep 13 00:52:34.578888 env[1445]: time="2025-09-13T00:52:34.578855190Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2962 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:34.590926 systemd[1]: Started cri-containerd-b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a.scope.
Sep 13 00:52:34.622860 env[1445]: time="2025-09-13T00:52:34.622811564Z" level=info msg="StartContainer for \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\" returns successfully"
Sep 13 00:52:34.949010 env[1445]: time="2025-09-13T00:52:34.948912565Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:52:34.966186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05-rootfs.mount: Deactivated successfully.
Sep 13 00:52:34.978539 env[1445]: time="2025-09-13T00:52:34.978494518Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\""
Sep 13 00:52:34.979538 env[1445]: time="2025-09-13T00:52:34.979509134Z" level=info msg="StartContainer for \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\""
Sep 13 00:52:34.999856 kubelet[2419]: I0913 00:52:34.999543 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rl67h" podStartSLOduration=1.721253726 podStartE2EDuration="16.999523941s" podCreationTimestamp="2025-09-13 00:52:18 +0000 UTC" firstStartedPulling="2025-09-13 00:52:18.800460705 +0000 UTC m=+8.099760882" lastFinishedPulling="2025-09-13 00:52:34.07873082 +0000 UTC m=+23.378031097" observedRunningTime="2025-09-13 00:52:34.999342438 +0000 UTC m=+24.298642615" watchObservedRunningTime="2025-09-13 00:52:34.999523941 +0000 UTC m=+24.298824118"
Sep 13 00:52:35.014238 systemd[1]: Started cri-containerd-d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3.scope.
Sep 13 00:52:35.095022 env[1445]: time="2025-09-13T00:52:35.094977771Z" level=info msg="StartContainer for \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\" returns successfully"
Sep 13 00:52:35.103523 systemd[1]: cri-containerd-d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3.scope: Deactivated successfully.
Sep 13 00:52:35.133564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3-rootfs.mount: Deactivated successfully.
Sep 13 00:52:35.149024 env[1445]: time="2025-09-13T00:52:35.148976379Z" level=info msg="shim disconnected" id=d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3
Sep 13 00:52:35.149352 env[1445]: time="2025-09-13T00:52:35.149332284Z" level=warning msg="cleaning up after shim disconnected" id=d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3 namespace=k8s.io
Sep 13 00:52:35.149430 env[1445]: time="2025-09-13T00:52:35.149416986Z" level=info msg="cleaning up dead shim"
Sep 13 00:52:35.162804 env[1445]: time="2025-09-13T00:52:35.162760285Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:52:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3055 runtime=io.containerd.runc.v2\n"
Sep 13 00:52:35.952503 env[1445]: time="2025-09-13T00:52:35.952460808Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:52:35.985149 env[1445]: time="2025-09-13T00:52:35.985107397Z" level=info msg="CreateContainer within sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\""
Sep 13 00:52:35.986964 env[1445]: time="2025-09-13T00:52:35.985874208Z" level=info msg="StartContainer for \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\""
Sep 13 00:52:36.010878 systemd[1]: Started cri-containerd-2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978.scope.
Sep 13 00:52:36.050916 env[1445]: time="2025-09-13T00:52:36.050862364Z" level=info msg="StartContainer for \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\" returns successfully"
Sep 13 00:52:36.285443 kubelet[2419]: I0913 00:52:36.284287 2419 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 00:52:36.340185 systemd[1]: Created slice kubepods-burstable-podb027e250_96dd_4349_82d0_3f6a747b1165.slice.
Sep 13 00:52:36.348293 systemd[1]: Created slice kubepods-burstable-pod17d7732c_2af2_44cf_aef4_242b3d061be1.slice.
Sep 13 00:52:36.441869 kubelet[2419]: I0913 00:52:36.441822 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfwp\" (UniqueName: \"kubernetes.io/projected/b027e250-96dd-4349-82d0-3f6a747b1165-kube-api-access-lwfwp\") pod \"coredns-674b8bbfcf-6ksvh\" (UID: \"b027e250-96dd-4349-82d0-3f6a747b1165\") " pod="kube-system/coredns-674b8bbfcf-6ksvh"
Sep 13 00:52:36.442062 kubelet[2419]: I0913 00:52:36.441882 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc62k\" (UniqueName: \"kubernetes.io/projected/17d7732c-2af2-44cf-aef4-242b3d061be1-kube-api-access-fc62k\") pod \"coredns-674b8bbfcf-4wrkh\" (UID: \"17d7732c-2af2-44cf-aef4-242b3d061be1\") " pod="kube-system/coredns-674b8bbfcf-4wrkh"
Sep 13 00:52:36.442062 kubelet[2419]: I0913 00:52:36.441908 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b027e250-96dd-4349-82d0-3f6a747b1165-config-volume\") pod \"coredns-674b8bbfcf-6ksvh\" (UID: \"b027e250-96dd-4349-82d0-3f6a747b1165\") " pod="kube-system/coredns-674b8bbfcf-6ksvh"
Sep 13 00:52:36.442062 kubelet[2419]: I0913 00:52:36.441948 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17d7732c-2af2-44cf-aef4-242b3d061be1-config-volume\") pod \"coredns-674b8bbfcf-4wrkh\" (UID: \"17d7732c-2af2-44cf-aef4-242b3d061be1\") " pod="kube-system/coredns-674b8bbfcf-4wrkh"
Sep 13 00:52:36.644736 env[1445]: time="2025-09-13T00:52:36.644686544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6ksvh,Uid:b027e250-96dd-4349-82d0-3f6a747b1165,Namespace:kube-system,Attempt:0,}"
Sep 13 00:52:36.652883 env[1445]: time="2025-09-13T00:52:36.652840964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wrkh,Uid:17d7732c-2af2-44cf-aef4-242b3d061be1,Namespace:kube-system,Attempt:0,}"
Sep 13 00:52:38.712053 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 00:52:38.714750 systemd-networkd[1608]: cilium_host: Link UP
Sep 13 00:52:38.715637 systemd-networkd[1608]: cilium_net: Link UP
Sep 13 00:52:38.715735 systemd-networkd[1608]: cilium_net: Gained carrier
Sep 13 00:52:38.716455 systemd-networkd[1608]: cilium_host: Gained carrier
Sep 13 00:52:38.717209 systemd-networkd[1608]: cilium_host: Gained IPv6LL
Sep 13 00:52:38.812189 systemd-networkd[1608]: cilium_net: Gained IPv6LL
Sep 13 00:52:38.957095 systemd-networkd[1608]: cilium_vxlan: Link UP
Sep 13 00:52:38.957108 systemd-networkd[1608]: cilium_vxlan: Gained carrier
Sep 13 00:52:39.284054 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:52:40.224891 systemd-networkd[1608]: lxc_health: Link UP
Sep 13 00:52:40.250562 systemd-networkd[1608]: lxc_health: Gained carrier
Sep 13 00:52:40.251102 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:52:40.486179 kubelet[2419]: I0913 00:52:40.486022 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nw25l" podStartSLOduration=12.344605172 podStartE2EDuration="22.486001454s" podCreationTimestamp="2025-09-13 00:52:18 +0000 UTC" firstStartedPulling="2025-09-13 00:52:18.575109008 +0000 UTC m=+7.874409285" lastFinishedPulling="2025-09-13 00:52:28.71650529 +0000 UTC m=+18.015805567" observedRunningTime="2025-09-13 00:52:36.972686639 +0000 UTC m=+26.271986916" watchObservedRunningTime="2025-09-13 00:52:40.486001454 +0000 UTC m=+29.785301631"
Sep 13 00:52:40.736738 systemd-networkd[1608]: lxcaf8363326f80: Link UP
Sep 13 00:52:40.751051 kernel: eth0: renamed from tmp63631
Sep 13 00:52:40.756495 systemd-networkd[1608]: lxcaf8363326f80: Gained carrier
Sep 13 00:52:40.759232 systemd-networkd[1608]: lxcbb1d7ab812a5: Link UP
Sep 13 00:52:40.775898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaf8363326f80: link becomes ready
Sep 13 00:52:40.776012 kernel: eth0: renamed from tmp3df1d
Sep 13 00:52:40.786122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbb1d7ab812a5: link becomes ready
Sep 13 00:52:40.786405 systemd-networkd[1608]: lxcbb1d7ab812a5: Gained carrier
Sep 13 00:52:40.805538 systemd-networkd[1608]: cilium_vxlan: Gained IPv6LL
Sep 13 00:52:42.084200 systemd-networkd[1608]: lxc_health: Gained IPv6LL
Sep 13 00:52:42.340208 systemd-networkd[1608]: lxcaf8363326f80: Gained IPv6LL
Sep 13 00:52:42.724177 systemd-networkd[1608]: lxcbb1d7ab812a5: Gained IPv6LL
Sep 13 00:52:44.393707 env[1445]: time="2025-09-13T00:52:44.393644647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:52:44.394186 env[1445]: time="2025-09-13T00:52:44.394153353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:52:44.394290 env[1445]: time="2025-09-13T00:52:44.394269155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:52:44.394540 env[1445]: time="2025-09-13T00:52:44.394499458Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/636318027d32c857606caa4a4627bb1d248078b6f5d98b0cb88c23543e32eaff pid=3607 runtime=io.containerd.runc.v2
Sep 13 00:52:44.398988 env[1445]: time="2025-09-13T00:52:44.398934712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:52:44.399172 env[1445]: time="2025-09-13T00:52:44.399144414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:52:44.399378 env[1445]: time="2025-09-13T00:52:44.399351417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:52:44.399826 env[1445]: time="2025-09-13T00:52:44.399782822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3df1d12444b25a84123651bc345f3051ccb6a61f2354beaa1c5e6456a2a9da10 pid=3616 runtime=io.containerd.runc.v2
Sep 13 00:52:44.427104 systemd[1]: Started cri-containerd-636318027d32c857606caa4a4627bb1d248078b6f5d98b0cb88c23543e32eaff.scope.
Sep 13 00:52:44.442401 systemd[1]: run-containerd-runc-k8s.io-636318027d32c857606caa4a4627bb1d248078b6f5d98b0cb88c23543e32eaff-runc.EEqWHK.mount: Deactivated successfully.
Sep 13 00:52:44.447792 systemd[1]: Started cri-containerd-3df1d12444b25a84123651bc345f3051ccb6a61f2354beaa1c5e6456a2a9da10.scope.
Sep 13 00:52:44.540589 env[1445]: time="2025-09-13T00:52:44.540480633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wrkh,Uid:17d7732c-2af2-44cf-aef4-242b3d061be1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3df1d12444b25a84123651bc345f3051ccb6a61f2354beaa1c5e6456a2a9da10\""
Sep 13 00:52:44.549075 env[1445]: time="2025-09-13T00:52:44.547500318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6ksvh,Uid:b027e250-96dd-4349-82d0-3f6a747b1165,Namespace:kube-system,Attempt:0,} returns sandbox id \"636318027d32c857606caa4a4627bb1d248078b6f5d98b0cb88c23543e32eaff\""
Sep 13 00:52:44.552750 env[1445]: time="2025-09-13T00:52:44.552706082Z" level=info msg="CreateContainer within sandbox \"3df1d12444b25a84123651bc345f3051ccb6a61f2354beaa1c5e6456a2a9da10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:52:44.559304 env[1445]: time="2025-09-13T00:52:44.559262161Z" level=info msg="CreateContainer within sandbox \"636318027d32c857606caa4a4627bb1d248078b6f5d98b0cb88c23543e32eaff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:52:44.598974 env[1445]: time="2025-09-13T00:52:44.598918444Z" level=info msg="CreateContainer within sandbox \"3df1d12444b25a84123651bc345f3051ccb6a61f2354beaa1c5e6456a2a9da10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc5057c0538f8a7911d53d57b7c46b967ae6e6c1d3510854fbd26ddefe90cba2\""
Sep 13 00:52:44.600821 env[1445]: time="2025-09-13T00:52:44.600778966Z" level=info msg="StartContainer for \"fc5057c0538f8a7911d53d57b7c46b967ae6e6c1d3510854fbd26ddefe90cba2\""
Sep 13 00:52:44.616263 env[1445]: time="2025-09-13T00:52:44.616211354Z" level=info msg="CreateContainer within sandbox \"636318027d32c857606caa4a4627bb1d248078b6f5d98b0cb88c23543e32eaff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6d90baa6da8eb8c8cc4533fdc0e1cee649caf63c2b0af3105c274e6120441b8\""
Sep 13 00:52:44.617234 env[1445]: time="2025-09-13T00:52:44.617196766Z" level=info msg="StartContainer for \"b6d90baa6da8eb8c8cc4533fdc0e1cee649caf63c2b0af3105c274e6120441b8\""
Sep 13 00:52:44.627993 systemd[1]: Started cri-containerd-fc5057c0538f8a7911d53d57b7c46b967ae6e6c1d3510854fbd26ddefe90cba2.scope.
Sep 13 00:52:44.657395 systemd[1]: Started cri-containerd-b6d90baa6da8eb8c8cc4533fdc0e1cee649caf63c2b0af3105c274e6120441b8.scope.
Sep 13 00:52:44.697203 env[1445]: time="2025-09-13T00:52:44.697158938Z" level=info msg="StartContainer for \"fc5057c0538f8a7911d53d57b7c46b967ae6e6c1d3510854fbd26ddefe90cba2\" returns successfully"
Sep 13 00:52:44.713391 env[1445]: time="2025-09-13T00:52:44.713341735Z" level=info msg="StartContainer for \"b6d90baa6da8eb8c8cc4533fdc0e1cee649caf63c2b0af3105c274e6120441b8\" returns successfully"
Sep 13 00:52:44.996204 kubelet[2419]: I0913 00:52:44.996083 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4wrkh" podStartSLOduration=26.996066073 podStartE2EDuration="26.996066073s" podCreationTimestamp="2025-09-13 00:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:44.980083779 +0000 UTC m=+34.279384056" watchObservedRunningTime="2025-09-13 00:52:44.996066073 +0000 UTC m=+34.295366350"
Sep 13 00:52:45.405746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622296031.mount: Deactivated successfully.
Sep 13 00:54:17.946387 systemd[1]: Started sshd@5-10.200.4.42:22-10.200.16.10:56730.service.
Sep 13 00:54:18.528853 sshd[3768]: Accepted publickey for core from 10.200.16.10 port 56730 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:54:18.530218 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:18.535040 systemd[1]: Started session-8.scope.
Sep 13 00:54:18.535588 systemd-logind[1431]: New session 8 of user core.
Sep 13 00:54:19.019317 sshd[3768]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:19.021822 systemd[1]: sshd@5-10.200.4.42:22-10.200.16.10:56730.service: Deactivated successfully.
Sep 13 00:54:19.022754 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:54:19.023438 systemd-logind[1431]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:54:19.024233 systemd-logind[1431]: Removed session 8.
Sep 13 00:54:24.126331 systemd[1]: Started sshd@6-10.200.4.42:22-10.200.16.10:57996.service.
Sep 13 00:54:24.716160 sshd[3783]: Accepted publickey for core from 10.200.16.10 port 57996 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:54:24.717621 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:24.722426 systemd[1]: Started session-9.scope.
Sep 13 00:54:24.722627 systemd-logind[1431]: New session 9 of user core.
Sep 13 00:54:25.196927 sshd[3783]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:25.199496 systemd[1]: sshd@6-10.200.4.42:22-10.200.16.10:57996.service: Deactivated successfully.
Sep 13 00:54:25.200370 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:54:25.200990 systemd-logind[1431]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:54:25.201805 systemd-logind[1431]: Removed session 9.
Sep 13 00:54:30.296418 systemd[1]: Started sshd@7-10.200.4.42:22-10.200.16.10:55078.service.
Sep 13 00:54:30.885060 sshd[3796]: Accepted publickey for core from 10.200.16.10 port 55078 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:54:30.886146 sshd[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:30.891088 systemd[1]: Started session-10.scope.
Sep 13 00:54:30.891497 systemd-logind[1431]: New session 10 of user core.
Sep 13 00:54:31.369794 sshd[3796]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:31.372436 systemd-logind[1431]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:54:31.372655 systemd[1]: sshd@7-10.200.4.42:22-10.200.16.10:55078.service: Deactivated successfully.
Sep 13 00:54:31.373459 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:54:31.374494 systemd-logind[1431]: Removed session 10.
Sep 13 00:54:36.470353 systemd[1]: Started sshd@8-10.200.4.42:22-10.200.16.10:55094.service.
Sep 13 00:54:37.059461 sshd[3809]: Accepted publickey for core from 10.200.16.10 port 55094 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:54:37.060736 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:37.065440 systemd[1]: Started session-11.scope.
Sep 13 00:54:37.065842 systemd-logind[1431]: New session 11 of user core.
Sep 13 00:54:37.541268 sshd[3809]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:37.544465 systemd[1]: sshd@8-10.200.4.42:22-10.200.16.10:55094.service: Deactivated successfully.
Sep 13 00:54:37.545336 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:54:37.545988 systemd-logind[1431]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:54:37.546757 systemd-logind[1431]: Removed session 11.
Sep 13 00:54:42.641599 systemd[1]: Started sshd@9-10.200.4.42:22-10.200.16.10:53414.service.
Sep 13 00:54:43.231688 sshd[3823]: Accepted publickey for core from 10.200.16.10 port 53414 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:54:43.232927 sshd[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:43.237579 systemd[1]: Started session-12.scope.
Sep 13 00:54:43.237993 systemd-logind[1431]: New session 12 of user core.
Sep 13 00:54:43.713342 sshd[3823]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:43.715993 systemd[1]: sshd@9-10.200.4.42:22-10.200.16.10:53414.service: Deactivated successfully.
Sep 13 00:54:43.716866 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:54:43.717542 systemd-logind[1431]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:54:43.718373 systemd-logind[1431]: Removed session 12.
Sep 13 00:54:43.811946 systemd[1]: Started sshd@10-10.200.4.42:22-10.200.16.10:53420.service.
Sep 13 00:54:44.401473 sshd[3835]: Accepted publickey for core from 10.200.16.10 port 53420 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:54:44.402952 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:44.407803 systemd[1]: Started session-13.scope.
Sep 13 00:54:44.408257 systemd-logind[1431]: New session 13 of user core.
Sep 13 00:54:44.940513 sshd[3835]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:44.943224 systemd[1]: sshd@10-10.200.4.42:22-10.200.16.10:53420.service: Deactivated successfully.
Sep 13 00:54:44.944048 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:54:44.944669 systemd-logind[1431]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:54:44.945473 systemd-logind[1431]: Removed session 13.
Sep 13 00:54:45.041525 systemd[1]: Started sshd@11-10.200.4.42:22-10.200.16.10:53432.service.
Sep 13 00:54:45.623526 sshd[3845]: Accepted publickey for core from 10.200.16.10 port 53432 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:54:45.624757 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:45.629325 systemd[1]: Started session-14.scope.
Sep 13 00:54:45.629654 systemd-logind[1431]: New session 14 of user core.
Sep 13 00:54:46.134435 sshd[3845]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:46.137054 systemd[1]: sshd@11-10.200.4.42:22-10.200.16.10:53432.service: Deactivated successfully.
Sep 13 00:54:46.137860 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:54:46.138527 systemd-logind[1431]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:54:46.139304 systemd-logind[1431]: Removed session 14.
Sep 13 00:54:46.878043 update_engine[1432]: I0913 00:54:46.878005 1432 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 13 00:54:46.878448 update_engine[1432]: I0913 00:54:46.878058 1432 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 13 00:54:46.878448 update_engine[1432]: I0913 00:54:46.878201 1432 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 13 00:54:46.878702 update_engine[1432]: I0913 00:54:46.878677 1432 omaha_request_params.cc:62] Current group set to lts
Sep 13 00:54:46.879045 update_engine[1432]: I0913 00:54:46.878837 1432 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 13 00:54:46.879045 update_engine[1432]: I0913 00:54:46.878847 1432 update_attempter.cc:643] Scheduling an action processor start.
Sep 13 00:54:46.879045 update_engine[1432]: I0913 00:54:46.878865 1432 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:54:46.879045 update_engine[1432]: I0913 00:54:46.878892 1432 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 13 00:54:46.879045 update_engine[1432]: I0913 00:54:46.878955 1432 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 13 00:54:46.879045 update_engine[1432]: I0913 00:54:46.878960 1432 omaha_request_action.cc:271] Request:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]:
Sep 13 00:54:46.879045 update_engine[1432]: I0913 00:54:46.878968 1432 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:54:46.879489 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 13 00:54:46.949994 update_engine[1432]: I0913 00:54:46.949957 1432 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:54:46.950219 update_engine[1432]: I0913 00:54:46.950199 1432 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:54:46.961717 update_engine[1432]: E0913 00:54:46.961689 1432 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:54:46.961830 update_engine[1432]: I0913 00:54:46.961799 1432 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 13 00:54:51.233756 systemd[1]: Started sshd@12-10.200.4.42:22-10.200.16.10:34886.service.
Sep 13 00:54:51.815969 sshd[3859]: Accepted publickey for core from 10.200.16.10 port 34886 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:54:51.817323 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:51.822076 systemd[1]: Started session-15.scope. Sep 13 00:54:51.822485 systemd-logind[1431]: New session 15 of user core. Sep 13 00:54:52.290671 sshd[3859]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:52.293512 systemd[1]: sshd@12-10.200.4.42:22-10.200.16.10:34886.service: Deactivated successfully. Sep 13 00:54:52.294372 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:54:52.295022 systemd-logind[1431]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:54:52.295786 systemd-logind[1431]: Removed session 15. Sep 13 00:54:56.881765 update_engine[1432]: I0913 00:54:56.881675 1432 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:54:56.882657 update_engine[1432]: I0913 00:54:56.882386 1432 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:54:56.882717 update_engine[1432]: I0913 00:54:56.882659 1432 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:54:56.898784 update_engine[1432]: E0913 00:54:56.898756 1432 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:54:56.898911 update_engine[1432]: I0913 00:54:56.898865 1432 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 00:54:57.385000 systemd[1]: Started sshd@13-10.200.4.42:22-10.200.16.10:34890.service. Sep 13 00:54:57.967276 sshd[3870]: Accepted publickey for core from 10.200.16.10 port 34890 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:54:57.968581 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:57.973176 systemd[1]: Started session-16.scope. 
Sep 13 00:54:57.973728 systemd-logind[1431]: New session 16 of user core. Sep 13 00:54:58.449292 sshd[3870]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:58.451806 systemd[1]: sshd@13-10.200.4.42:22-10.200.16.10:34890.service: Deactivated successfully. Sep 13 00:54:58.452696 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:54:58.453336 systemd-logind[1431]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:54:58.454175 systemd-logind[1431]: Removed session 16. Sep 13 00:54:58.555388 systemd[1]: Started sshd@14-10.200.4.42:22-10.200.16.10:34892.service. Sep 13 00:54:59.140208 sshd[3882]: Accepted publickey for core from 10.200.16.10 port 34892 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:54:59.141733 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:59.146818 systemd[1]: Started session-17.scope. Sep 13 00:54:59.147437 systemd-logind[1431]: New session 17 of user core. Sep 13 00:54:59.642248 sshd[3882]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:59.644775 systemd[1]: sshd@14-10.200.4.42:22-10.200.16.10:34892.service: Deactivated successfully. Sep 13 00:54:59.645634 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:54:59.646288 systemd-logind[1431]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:54:59.646979 systemd-logind[1431]: Removed session 17. Sep 13 00:54:59.740662 systemd[1]: Started sshd@15-10.200.4.42:22-10.200.16.10:34894.service. Sep 13 00:55:00.325447 sshd[3891]: Accepted publickey for core from 10.200.16.10 port 34894 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:00.326734 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:00.331152 systemd-logind[1431]: New session 18 of user core. Sep 13 00:55:00.331476 systemd[1]: Started session-18.scope. 
Sep 13 00:55:01.332632 sshd[3891]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:01.335121 systemd[1]: sshd@15-10.200.4.42:22-10.200.16.10:34894.service: Deactivated successfully. Sep 13 00:55:01.336243 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:55:01.336267 systemd-logind[1431]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:55:01.337276 systemd-logind[1431]: Removed session 18. Sep 13 00:55:01.431085 systemd[1]: Started sshd@16-10.200.4.42:22-10.200.16.10:52720.service. Sep 13 00:55:02.016409 sshd[3908]: Accepted publickey for core from 10.200.16.10 port 52720 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:02.019088 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:02.023753 systemd[1]: Started session-19.scope. Sep 13 00:55:02.024229 systemd-logind[1431]: New session 19 of user core. Sep 13 00:55:02.593307 sshd[3908]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:02.595826 systemd[1]: sshd@16-10.200.4.42:22-10.200.16.10:52720.service: Deactivated successfully. Sep 13 00:55:02.597008 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:55:02.597064 systemd-logind[1431]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:55:02.598081 systemd-logind[1431]: Removed session 19. Sep 13 00:55:02.694646 systemd[1]: Started sshd@17-10.200.4.42:22-10.200.16.10:52726.service. Sep 13 00:55:03.286946 sshd[3917]: Accepted publickey for core from 10.200.16.10 port 52726 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:03.288244 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:03.292882 systemd[1]: Started session-20.scope. Sep 13 00:55:03.293444 systemd-logind[1431]: New session 20 of user core. 
Sep 13 00:55:03.761712 sshd[3917]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:03.764723 systemd[1]: sshd@17-10.200.4.42:22-10.200.16.10:52726.service: Deactivated successfully. Sep 13 00:55:03.765702 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:55:03.766428 systemd-logind[1431]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:55:03.767409 systemd-logind[1431]: Removed session 20. Sep 13 00:55:06.881296 update_engine[1432]: I0913 00:55:06.881186 1432 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:55:06.881820 update_engine[1432]: I0913 00:55:06.881635 1432 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:55:06.881820 update_engine[1432]: I0913 00:55:06.881817 1432 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:55:06.890074 update_engine[1432]: E0913 00:55:06.890044 1432 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:55:06.890174 update_engine[1432]: I0913 00:55:06.890136 1432 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:55:08.861199 systemd[1]: Started sshd@18-10.200.4.42:22-10.200.16.10:52728.service. Sep 13 00:55:09.452687 sshd[3932]: Accepted publickey for core from 10.200.16.10 port 52728 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:09.454003 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:09.458772 systemd[1]: Started session-21.scope. Sep 13 00:55:09.459332 systemd-logind[1431]: New session 21 of user core. Sep 13 00:55:09.930913 sshd[3932]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:09.933441 systemd[1]: sshd@18-10.200.4.42:22-10.200.16.10:52728.service: Deactivated successfully. Sep 13 00:55:09.934433 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:55:09.935190 systemd-logind[1431]: Session 21 logged out. 
Waiting for processes to exit. Sep 13 00:55:09.935937 systemd-logind[1431]: Removed session 21. Sep 13 00:55:15.041875 systemd[1]: Started sshd@19-10.200.4.42:22-10.200.16.10:46144.service. Sep 13 00:55:15.630091 sshd[3946]: Accepted publickey for core from 10.200.16.10 port 46144 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:15.631351 sshd[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:15.635934 systemd-logind[1431]: New session 22 of user core. Sep 13 00:55:15.636577 systemd[1]: Started session-22.scope. Sep 13 00:55:16.105885 sshd[3946]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:16.109024 systemd[1]: sshd@19-10.200.4.42:22-10.200.16.10:46144.service: Deactivated successfully. Sep 13 00:55:16.109792 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:55:16.110235 systemd-logind[1431]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:55:16.110979 systemd-logind[1431]: Removed session 22. Sep 13 00:55:16.204867 systemd[1]: Started sshd@20-10.200.4.42:22-10.200.16.10:46156.service. Sep 13 00:55:16.793364 sshd[3958]: Accepted publickey for core from 10.200.16.10 port 46156 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:16.794643 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:16.799385 systemd[1]: Started session-23.scope. Sep 13 00:55:16.799920 systemd-logind[1431]: New session 23 of user core. Sep 13 00:55:16.877300 update_engine[1432]: I0913 00:55:16.877259 1432 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:55:16.877693 update_engine[1432]: I0913 00:55:16.877517 1432 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:55:16.877738 update_engine[1432]: I0913 00:55:16.877712 1432 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:55:16.900804 update_engine[1432]: E0913 00:55:16.900770 1432 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:55:16.900964 update_engine[1432]: I0913 00:55:16.900875 1432 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:55:16.900964 update_engine[1432]: I0913 00:55:16.900883 1432 omaha_request_action.cc:621] Omaha request response: Sep 13 00:55:16.901069 update_engine[1432]: E0913 00:55:16.900964 1432 omaha_request_action.cc:640] Omaha request network transfer failed. Sep 13 00:55:16.901069 update_engine[1432]: I0913 00:55:16.900978 1432 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 00:55:16.901069 update_engine[1432]: I0913 00:55:16.900982 1432 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:55:16.901069 update_engine[1432]: I0913 00:55:16.900987 1432 update_attempter.cc:306] Processing Done. Sep 13 00:55:16.901069 update_engine[1432]: E0913 00:55:16.901001 1432 update_attempter.cc:619] Update failed. Sep 13 00:55:16.901069 update_engine[1432]: I0913 00:55:16.901006 1432 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 00:55:16.901069 update_engine[1432]: I0913 00:55:16.901011 1432 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 00:55:16.901069 update_engine[1432]: I0913 00:55:16.901016 1432 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 13 00:55:16.901333 update_engine[1432]: I0913 00:55:16.901111 1432 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:55:16.901333 update_engine[1432]: I0913 00:55:16.901136 1432 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 13 00:55:16.901333 update_engine[1432]: I0913 00:55:16.901141 1432 omaha_request_action.cc:271] Request: Sep 13 00:55:16.901333 update_engine[1432]: Sep 13 00:55:16.901333 update_engine[1432]: Sep 13 00:55:16.901333 update_engine[1432]: Sep 13 00:55:16.901333 update_engine[1432]: Sep 13 00:55:16.901333 update_engine[1432]: Sep 13 00:55:16.901333 update_engine[1432]: Sep 13 00:55:16.901333 update_engine[1432]: I0913 00:55:16.901147 1432 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:55:16.901333 update_engine[1432]: I0913 00:55:16.901317 1432 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:55:16.901660 update_engine[1432]: I0913 00:55:16.901468 1432 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:55:16.901859 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 00:55:16.906599 update_engine[1432]: E0913 00:55:16.906575 1432 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:55:16.906688 update_engine[1432]: I0913 00:55:16.906659 1432 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:55:16.906688 update_engine[1432]: I0913 00:55:16.906668 1432 omaha_request_action.cc:621] Omaha request response: Sep 13 00:55:16.906688 update_engine[1432]: I0913 00:55:16.906674 1432 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:55:16.906688 update_engine[1432]: I0913 00:55:16.906678 1432 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:55:16.906688 update_engine[1432]: I0913 00:55:16.906682 1432 update_attempter.cc:306] Processing Done. Sep 13 00:55:16.906688 update_engine[1432]: I0913 00:55:16.906687 1432 update_attempter.cc:310] Error event sent. 
Sep 13 00:55:16.906886 update_engine[1432]: I0913 00:55:16.906697 1432 update_check_scheduler.cc:74] Next update check in 46m51s Sep 13 00:55:16.907060 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 00:55:18.402235 kubelet[2419]: I0913 00:55:18.402157 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6ksvh" podStartSLOduration=180.402137177 podStartE2EDuration="3m0.402137177s" podCreationTimestamp="2025-09-13 00:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:52:45.026327335 +0000 UTC m=+34.325627612" watchObservedRunningTime="2025-09-13 00:55:18.402137177 +0000 UTC m=+187.701437354" Sep 13 00:55:18.418478 env[1445]: time="2025-09-13T00:55:18.418433258Z" level=info msg="StopContainer for \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\" with timeout 30 (s)" Sep 13 00:55:18.419318 env[1445]: time="2025-09-13T00:55:18.419270057Z" level=info msg="Stop container \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\" with signal terminated" Sep 13 00:55:18.430869 systemd[1]: run-containerd-runc-k8s.io-2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978-runc.v1ETr3.mount: Deactivated successfully. Sep 13 00:55:18.444781 systemd[1]: cri-containerd-b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a.scope: Deactivated successfully. 
Sep 13 00:55:18.459785 env[1445]: time="2025-09-13T00:55:18.459728609Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:55:18.469208 env[1445]: time="2025-09-13T00:55:18.469175998Z" level=info msg="StopContainer for \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\" with timeout 2 (s)" Sep 13 00:55:18.473770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a-rootfs.mount: Deactivated successfully. Sep 13 00:55:18.475089 env[1445]: time="2025-09-13T00:55:18.474999291Z" level=info msg="Stop container \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\" with signal terminated" Sep 13 00:55:18.482384 systemd-networkd[1608]: lxc_health: Link DOWN Sep 13 00:55:18.482393 systemd-networkd[1608]: lxc_health: Lost carrier Sep 13 00:55:18.502375 systemd[1]: cri-containerd-2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978.scope: Deactivated successfully. Sep 13 00:55:18.502693 systemd[1]: cri-containerd-2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978.scope: Consumed 6.907s CPU time. Sep 13 00:55:18.527756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978-rootfs.mount: Deactivated successfully. 
Sep 13 00:55:18.529319 env[1445]: time="2025-09-13T00:55:18.529274027Z" level=info msg="shim disconnected" id=b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a Sep 13 00:55:18.529491 env[1445]: time="2025-09-13T00:55:18.529474226Z" level=warning msg="cleaning up after shim disconnected" id=b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a namespace=k8s.io Sep 13 00:55:18.529550 env[1445]: time="2025-09-13T00:55:18.529540226Z" level=info msg="cleaning up dead shim" Sep 13 00:55:18.537604 env[1445]: time="2025-09-13T00:55:18.537577917Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4027 runtime=io.containerd.runc.v2\n" Sep 13 00:55:18.538557 env[1445]: time="2025-09-13T00:55:18.538531516Z" level=info msg="shim disconnected" id=2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978 Sep 13 00:55:18.538694 env[1445]: time="2025-09-13T00:55:18.538680515Z" level=warning msg="cleaning up after shim disconnected" id=2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978 namespace=k8s.io Sep 13 00:55:18.538766 env[1445]: time="2025-09-13T00:55:18.538757615Z" level=info msg="cleaning up dead shim" Sep 13 00:55:18.541766 env[1445]: time="2025-09-13T00:55:18.541733212Z" level=info msg="StopContainer for \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\" returns successfully" Sep 13 00:55:18.542678 env[1445]: time="2025-09-13T00:55:18.542647511Z" level=info msg="StopPodSandbox for \"f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc\"" Sep 13 00:55:18.542885 env[1445]: time="2025-09-13T00:55:18.542852010Z" level=info msg="Container to stop \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:18.545390 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc-shm.mount: Deactivated successfully. Sep 13 00:55:18.549910 env[1445]: time="2025-09-13T00:55:18.549887702Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4039 runtime=io.containerd.runc.v2\n" Sep 13 00:55:18.553781 systemd[1]: cri-containerd-f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc.scope: Deactivated successfully. Sep 13 00:55:18.555512 env[1445]: time="2025-09-13T00:55:18.555488595Z" level=info msg="StopContainer for \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\" returns successfully" Sep 13 00:55:18.556068 env[1445]: time="2025-09-13T00:55:18.556015195Z" level=info msg="StopPodSandbox for \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\"" Sep 13 00:55:18.556250 env[1445]: time="2025-09-13T00:55:18.556227295Z" level=info msg="Container to stop \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:18.556349 env[1445]: time="2025-09-13T00:55:18.556333094Z" level=info msg="Container to stop \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:18.556441 env[1445]: time="2025-09-13T00:55:18.556412894Z" level=info msg="Container to stop \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:18.556504 env[1445]: time="2025-09-13T00:55:18.556489694Z" level=info msg="Container to stop \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:18.556584 env[1445]: time="2025-09-13T00:55:18.556568994Z" level=info 
msg="Container to stop \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:55:18.569356 systemd[1]: cri-containerd-32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d.scope: Deactivated successfully. Sep 13 00:55:18.602837 env[1445]: time="2025-09-13T00:55:18.602767139Z" level=info msg="shim disconnected" id=f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc Sep 13 00:55:18.602837 env[1445]: time="2025-09-13T00:55:18.602821439Z" level=warning msg="cleaning up after shim disconnected" id=f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc namespace=k8s.io Sep 13 00:55:18.602837 env[1445]: time="2025-09-13T00:55:18.602834739Z" level=info msg="cleaning up dead shim" Sep 13 00:55:18.603539 env[1445]: time="2025-09-13T00:55:18.603502339Z" level=info msg="shim disconnected" id=32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d Sep 13 00:55:18.603625 env[1445]: time="2025-09-13T00:55:18.603545438Z" level=warning msg="cleaning up after shim disconnected" id=32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d namespace=k8s.io Sep 13 00:55:18.603625 env[1445]: time="2025-09-13T00:55:18.603556538Z" level=info msg="cleaning up dead shim" Sep 13 00:55:18.615190 env[1445]: time="2025-09-13T00:55:18.615149125Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n" Sep 13 00:55:18.615511 env[1445]: time="2025-09-13T00:55:18.615481924Z" level=info msg="TearDown network for sandbox \"f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc\" successfully" Sep 13 00:55:18.615576 env[1445]: time="2025-09-13T00:55:18.615513524Z" level=info msg="StopPodSandbox for \"f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc\" returns successfully" Sep 13 00:55:18.616470 env[1445]: 
time="2025-09-13T00:55:18.616445123Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4089 runtime=io.containerd.runc.v2\n" Sep 13 00:55:18.616859 env[1445]: time="2025-09-13T00:55:18.616837323Z" level=info msg="TearDown network for sandbox \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" successfully" Sep 13 00:55:18.616963 env[1445]: time="2025-09-13T00:55:18.616944523Z" level=info msg="StopPodSandbox for \"32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d\" returns successfully" Sep 13 00:55:18.698253 kubelet[2419]: I0913 00:55:18.698151 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9350ce8-2347-4fe8-9a54-2d10a54f4348-clustermesh-secrets\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.698493 kubelet[2419]: I0913 00:55:18.698468 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-net\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.698583 kubelet[2419]: I0913 00:55:18.698570 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-cgroup\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.698660 kubelet[2419]: I0913 00:55:18.698649 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hostproc\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: 
\"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.698742 kubelet[2419]: I0913 00:55:18.698730 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-config-path\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.698816 kubelet[2419]: I0913 00:55:18.698804 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-kernel\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.698899 kubelet[2419]: I0913 00:55:18.698887 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-bpf-maps\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.698973 kubelet[2419]: I0913 00:55:18.698961 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-lib-modules\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.699071 kubelet[2419]: I0913 00:55:18.699057 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-xtables-lock\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.699160 kubelet[2419]: I0913 00:55:18.699148 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-cilium-config-path\") pod \"8b9ee15b-6e16-481a-9c90-8dfb93741d9c\" (UID: \"8b9ee15b-6e16-481a-9c90-8dfb93741d9c\") " Sep 13 00:55:18.699240 kubelet[2419]: I0913 00:55:18.699228 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hubble-tls\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.699322 kubelet[2419]: I0913 00:55:18.699310 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl85t\" (UniqueName: \"kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-kube-api-access-vl85t\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.701210 kubelet[2419]: I0913 00:55:18.701182 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cni-path\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.701308 kubelet[2419]: I0913 00:55:18.701214 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-etc-cni-netd\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.701308 kubelet[2419]: I0913 00:55:18.701235 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-run\") pod \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\" (UID: \"e9350ce8-2347-4fe8-9a54-2d10a54f4348\") " Sep 13 00:55:18.701308 kubelet[2419]: I0913 00:55:18.701262 2419 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtllp\" (UniqueName: \"kubernetes.io/projected/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-kube-api-access-jtllp\") pod \"8b9ee15b-6e16-481a-9c90-8dfb93741d9c\" (UID: \"8b9ee15b-6e16-481a-9c90-8dfb93741d9c\") " Sep 13 00:55:18.701611 kubelet[2419]: I0913 00:55:18.701592 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.701727 kubelet[2419]: I0913 00:55:18.701713 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.701801 kubelet[2419]: I0913 00:55:18.701789 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.701868 kubelet[2419]: I0913 00:55:18.701856 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hostproc" (OuterVolumeSpecName: "hostproc") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.702014 kubelet[2419]: I0913 00:55:18.701993 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.702091 kubelet[2419]: I0913 00:55:18.702044 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.704528 kubelet[2419]: I0913 00:55:18.704506 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.705412 kubelet[2419]: I0913 00:55:18.705386 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:55:18.705531 kubelet[2419]: I0913 00:55:18.705479 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9350ce8-2347-4fe8-9a54-2d10a54f4348-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:55:18.705531 kubelet[2419]: I0913 00:55:18.705512 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cni-path" (OuterVolumeSpecName: "cni-path") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.705735 kubelet[2419]: I0913 00:55:18.705697 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-kube-api-access-jtllp" (OuterVolumeSpecName: "kube-api-access-jtllp") pod "8b9ee15b-6e16-481a-9c90-8dfb93741d9c" (UID: "8b9ee15b-6e16-481a-9c90-8dfb93741d9c"). InnerVolumeSpecName "kube-api-access-jtllp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:55:18.705844 kubelet[2419]: I0913 00:55:18.705829 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.705943 kubelet[2419]: I0913 00:55:18.705929 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:55:18.708419 kubelet[2419]: I0913 00:55:18.708392 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:55:18.708683 kubelet[2419]: I0913 00:55:18.708660 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b9ee15b-6e16-481a-9c90-8dfb93741d9c" (UID: "8b9ee15b-6e16-481a-9c90-8dfb93741d9c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:55:18.710751 kubelet[2419]: I0913 00:55:18.710725 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-kube-api-access-vl85t" (OuterVolumeSpecName: "kube-api-access-vl85t") pod "e9350ce8-2347-4fe8-9a54-2d10a54f4348" (UID: "e9350ce8-2347-4fe8-9a54-2d10a54f4348"). InnerVolumeSpecName "kube-api-access-vl85t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:55:18.801674 kubelet[2419]: I0913 00:55:18.801636 2419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-lib-modules\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801674 kubelet[2419]: I0913 00:55:18.801667 2419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-xtables-lock\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801674 kubelet[2419]: I0913 00:55:18.801680 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-cilium-config-path\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801725 2419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hubble-tls\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801737 2419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vl85t\" (UniqueName: \"kubernetes.io/projected/e9350ce8-2347-4fe8-9a54-2d10a54f4348-kube-api-access-vl85t\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801749 2419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cni-path\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801761 2419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-etc-cni-netd\") on node 
\"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801772 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-run\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801782 2419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jtllp\" (UniqueName: \"kubernetes.io/projected/8b9ee15b-6e16-481a-9c90-8dfb93741d9c-kube-api-access-jtllp\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801792 2419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9350ce8-2347-4fe8-9a54-2d10a54f4348-clustermesh-secrets\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.801925 kubelet[2419]: I0913 00:55:18.801805 2419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-net\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.802180 kubelet[2419]: I0913 00:55:18.801817 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-cgroup\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.802180 kubelet[2419]: I0913 00:55:18.801827 2419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-hostproc\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.802180 kubelet[2419]: I0913 00:55:18.801838 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e9350ce8-2347-4fe8-9a54-2d10a54f4348-cilium-config-path\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.802180 kubelet[2419]: I0913 00:55:18.801850 2419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.802180 kubelet[2419]: I0913 00:55:18.801863 2419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9350ce8-2347-4fe8-9a54-2d10a54f4348-bpf-maps\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\"" Sep 13 00:55:18.810094 systemd[1]: Removed slice kubepods-burstable-pode9350ce8_2347_4fe8_9a54_2d10a54f4348.slice. Sep 13 00:55:18.810213 systemd[1]: kubepods-burstable-pode9350ce8_2347_4fe8_9a54_2d10a54f4348.slice: Consumed 7.010s CPU time. Sep 13 00:55:18.811522 systemd[1]: Removed slice kubepods-besteffort-pod8b9ee15b_6e16_481a_9c90_8dfb93741d9c.slice. 
Sep 13 00:55:19.226401 kubelet[2419]: I0913 00:55:19.226374 2419 scope.go:117] "RemoveContainer" containerID="b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a" Sep 13 00:55:19.229563 env[1445]: time="2025-09-13T00:55:19.229511109Z" level=info msg="RemoveContainer for \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\"" Sep 13 00:55:19.240096 env[1445]: time="2025-09-13T00:55:19.239973297Z" level=info msg="RemoveContainer for \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\" returns successfully" Sep 13 00:55:19.240429 kubelet[2419]: I0913 00:55:19.240412 2419 scope.go:117] "RemoveContainer" containerID="b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a" Sep 13 00:55:19.240828 env[1445]: time="2025-09-13T00:55:19.240739497Z" level=error msg="ContainerStatus for \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\": not found" Sep 13 00:55:19.241003 kubelet[2419]: E0913 00:55:19.240962 2419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\": not found" containerID="b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a" Sep 13 00:55:19.241103 kubelet[2419]: I0913 00:55:19.241013 2419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a"} err="failed to get container status \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2bdbcde03b28a6e183da965261c0debd9f54c4a826e5b3cc95ee3ef6311274a\": not found" Sep 13 00:55:19.241103 kubelet[2419]: I0913 
00:55:19.241065 2419 scope.go:117] "RemoveContainer" containerID="2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978" Sep 13 00:55:19.242188 env[1445]: time="2025-09-13T00:55:19.242150495Z" level=info msg="RemoveContainer for \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\"" Sep 13 00:55:19.249219 env[1445]: time="2025-09-13T00:55:19.249187987Z" level=info msg="RemoveContainer for \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\" returns successfully" Sep 13 00:55:19.249375 kubelet[2419]: I0913 00:55:19.249361 2419 scope.go:117] "RemoveContainer" containerID="d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3" Sep 13 00:55:19.250585 env[1445]: time="2025-09-13T00:55:19.250558685Z" level=info msg="RemoveContainer for \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\"" Sep 13 00:55:19.257638 env[1445]: time="2025-09-13T00:55:19.257600977Z" level=info msg="RemoveContainer for \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\" returns successfully" Sep 13 00:55:19.257990 kubelet[2419]: I0913 00:55:19.257975 2419 scope.go:117] "RemoveContainer" containerID="dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05" Sep 13 00:55:19.259166 env[1445]: time="2025-09-13T00:55:19.259141676Z" level=info msg="RemoveContainer for \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\"" Sep 13 00:55:19.266593 env[1445]: time="2025-09-13T00:55:19.266560267Z" level=info msg="RemoveContainer for \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\" returns successfully" Sep 13 00:55:19.266748 kubelet[2419]: I0913 00:55:19.266731 2419 scope.go:117] "RemoveContainer" containerID="3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051" Sep 13 00:55:19.269465 env[1445]: time="2025-09-13T00:55:19.267678566Z" level=info msg="RemoveContainer for \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\"" Sep 13 00:55:19.273327 
env[1445]: time="2025-09-13T00:55:19.273297960Z" level=info msg="RemoveContainer for \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\" returns successfully" Sep 13 00:55:19.273503 kubelet[2419]: I0913 00:55:19.273484 2419 scope.go:117] "RemoveContainer" containerID="bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3" Sep 13 00:55:19.274472 env[1445]: time="2025-09-13T00:55:19.274444758Z" level=info msg="RemoveContainer for \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\"" Sep 13 00:55:19.279906 env[1445]: time="2025-09-13T00:55:19.279876652Z" level=info msg="RemoveContainer for \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\" returns successfully" Sep 13 00:55:19.280067 kubelet[2419]: I0913 00:55:19.280050 2419 scope.go:117] "RemoveContainer" containerID="2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978" Sep 13 00:55:19.280324 env[1445]: time="2025-09-13T00:55:19.280269052Z" level=error msg="ContainerStatus for \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\": not found" Sep 13 00:55:19.280442 kubelet[2419]: E0913 00:55:19.280418 2419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\": not found" containerID="2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978" Sep 13 00:55:19.280509 kubelet[2419]: I0913 00:55:19.280447 2419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978"} err="failed to get container status \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"2bc62a6a22e901ddce6026187ef7734d954b370c93f82af9787d66f9a90bc978\": not found" Sep 13 00:55:19.280509 kubelet[2419]: I0913 00:55:19.280470 2419 scope.go:117] "RemoveContainer" containerID="d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3" Sep 13 00:55:19.281219 env[1445]: time="2025-09-13T00:55:19.280936251Z" level=error msg="ContainerStatus for \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\": not found" Sep 13 00:55:19.285094 kubelet[2419]: E0913 00:55:19.285069 2419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\": not found" containerID="d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3" Sep 13 00:55:19.285177 kubelet[2419]: I0913 00:55:19.285105 2419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3"} err="failed to get container status \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2dd39a91622708ef236be57f5c89da0ad981bbd711b1a34755317a13bf951a3\": not found" Sep 13 00:55:19.285177 kubelet[2419]: I0913 00:55:19.285124 2419 scope.go:117] "RemoveContainer" containerID="dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05" Sep 13 00:55:19.285554 env[1445]: time="2025-09-13T00:55:19.285498546Z" level=error msg="ContainerStatus for \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\": not found" Sep 13 00:55:19.286137 kubelet[2419]: E0913 00:55:19.286114 2419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\": not found" containerID="dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05" Sep 13 00:55:19.286213 kubelet[2419]: I0913 00:55:19.286145 2419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05"} err="failed to get container status \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd81048b25ae79a7d2100b2e9fba71bddc322f447d71b646e289d509cb6f6d05\": not found" Sep 13 00:55:19.286213 kubelet[2419]: I0913 00:55:19.286166 2419 scope.go:117] "RemoveContainer" containerID="3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051" Sep 13 00:55:19.286792 env[1445]: time="2025-09-13T00:55:19.286738145Z" level=error msg="ContainerStatus for \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\": not found" Sep 13 00:55:19.286884 kubelet[2419]: E0913 00:55:19.286864 2419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\": not found" containerID="3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051" Sep 13 00:55:19.286944 kubelet[2419]: I0913 00:55:19.286892 2419 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051"} err="failed to get container status \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b0832fe5cbe37658d63eae2f42c9892aa50713ffd794cc688a660a23946f051\": not found" Sep 13 00:55:19.286944 kubelet[2419]: I0913 00:55:19.286910 2419 scope.go:117] "RemoveContainer" containerID="bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3" Sep 13 00:55:19.287129 env[1445]: time="2025-09-13T00:55:19.287080844Z" level=error msg="ContainerStatus for \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\": not found" Sep 13 00:55:19.287225 kubelet[2419]: E0913 00:55:19.287205 2419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\": not found" containerID="bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3" Sep 13 00:55:19.287285 kubelet[2419]: I0913 00:55:19.287233 2419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3"} err="failed to get container status \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe89112a636d327e1f885cd631e10625d4b0ad6749e860be68d469de1e88c3\": not found" Sep 13 00:55:19.426350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f23953335f1e87f6bbaf95875f983bde8383b1eb975dc32f8adb26b1d5c33cbc-rootfs.mount: Deactivated successfully. 
Sep 13 00:55:19.426521 systemd[1]: var-lib-kubelet-pods-8b9ee15b\x2d6e16\x2d481a\x2d9c90\x2d8dfb93741d9c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djtllp.mount: Deactivated successfully. Sep 13 00:55:19.426609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d-rootfs.mount: Deactivated successfully. Sep 13 00:55:19.426682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32ef8293076a0d53df0b49057622cf85d19b47397622a2807d82a9fd3031106d-shm.mount: Deactivated successfully. Sep 13 00:55:19.426759 systemd[1]: var-lib-kubelet-pods-e9350ce8\x2d2347\x2d4fe8\x2d9a54\x2d2d10a54f4348-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvl85t.mount: Deactivated successfully. Sep 13 00:55:19.426836 systemd[1]: var-lib-kubelet-pods-e9350ce8\x2d2347\x2d4fe8\x2d9a54\x2d2d10a54f4348-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:55:19.426921 systemd[1]: var-lib-kubelet-pods-e9350ce8\x2d2347\x2d4fe8\x2d9a54\x2d2d10a54f4348-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:55:20.551576 sshd[3958]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:20.554464 systemd[1]: sshd@20-10.200.4.42:22-10.200.16.10:46156.service: Deactivated successfully. Sep 13 00:55:20.555366 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:55:20.556060 systemd-logind[1431]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:55:20.556896 systemd-logind[1431]: Removed session 23. Sep 13 00:55:20.662147 systemd[1]: Started sshd@21-10.200.4.42:22-10.200.16.10:43572.service. 
Sep 13 00:55:20.805962 kubelet[2419]: I0913 00:55:20.805378 2419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9ee15b-6e16-481a-9c90-8dfb93741d9c" path="/var/lib/kubelet/pods/8b9ee15b-6e16-481a-9c90-8dfb93741d9c/volumes" Sep 13 00:55:20.805962 kubelet[2419]: I0913 00:55:20.805900 2419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9350ce8-2347-4fe8-9a54-2d10a54f4348" path="/var/lib/kubelet/pods/e9350ce8-2347-4fe8-9a54-2d10a54f4348/volumes" Sep 13 00:55:20.890918 kubelet[2419]: E0913 00:55:20.890831 2419 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:55:21.255481 sshd[4123]: Accepted publickey for core from 10.200.16.10 port 43572 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI Sep 13 00:55:21.256953 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:21.261645 systemd[1]: Started session-24.scope. Sep 13 00:55:21.262209 systemd-logind[1431]: New session 24 of user core. Sep 13 00:55:22.288344 systemd[1]: Created slice kubepods-burstable-pod5de42f08_bea6_4365_a6fa_aa8f2bb408e7.slice. 
Sep 13 00:55:22.321889 kubelet[2419]: I0913 00:55:22.321854 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-lib-modules\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322312 kubelet[2419]: I0913 00:55:22.321904 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-clustermesh-secrets\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322312 kubelet[2419]: I0913 00:55:22.321927 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skq2n\" (UniqueName: \"kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-kube-api-access-skq2n\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322312 kubelet[2419]: I0913 00:55:22.321950 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-xtables-lock\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322312 kubelet[2419]: I0913 00:55:22.321982 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hostproc\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322312 kubelet[2419]: I0913 00:55:22.322006 2419 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-cgroup\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322312 kubelet[2419]: I0913 00:55:22.322053 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cni-path\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322493 kubelet[2419]: I0913 00:55:22.322074 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-config-path\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322493 kubelet[2419]: I0913 00:55:22.322098 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-net\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322493 kubelet[2419]: I0913 00:55:22.322135 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-run\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322493 kubelet[2419]: I0913 00:55:22.322159 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-kernel\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322493 kubelet[2419]: I0913 00:55:22.322184 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-ipsec-secrets\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322598 kubelet[2419]: I0913 00:55:22.322219 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-etc-cni-netd\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322598 kubelet[2419]: I0913 00:55:22.322239 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hubble-tls\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.322598 kubelet[2419]: I0913 00:55:22.322263 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-bpf-maps\") pod \"cilium-dzm5h\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") " pod="kube-system/cilium-dzm5h" Sep 13 00:55:22.358163 sshd[4123]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:22.361173 systemd[1]: sshd@21-10.200.4.42:22-10.200.16.10:43572.service: Deactivated successfully. Sep 13 00:55:22.362130 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 13 00:55:22.362801 systemd-logind[1431]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:55:22.363844 systemd-logind[1431]: Removed session 24. Sep 13 00:55:22.469193 systemd[1]: Started sshd@22-10.200.4.42:22-10.200.16.10:43578.service. Sep 13 00:55:22.593088 env[1445]: time="2025-09-13T00:55:22.593023972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzm5h,Uid:5de42f08-bea6-4365-a6fa-aa8f2bb408e7,Namespace:kube-system,Attempt:0,}" Sep 13 00:55:22.621016 env[1445]: time="2025-09-13T00:55:22.620946145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:22.621016 env[1445]: time="2025-09-13T00:55:22.620982445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:22.621016 env[1445]: time="2025-09-13T00:55:22.620997645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:22.621428 env[1445]: time="2025-09-13T00:55:22.621387245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69 pid=4148 runtime=io.containerd.runc.v2 Sep 13 00:55:22.632982 systemd[1]: Started cri-containerd-98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69.scope. 
Sep 13 00:55:22.657087 env[1445]: time="2025-09-13T00:55:22.656448111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzm5h,Uid:5de42f08-bea6-4365-a6fa-aa8f2bb408e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\"" Sep 13 00:55:22.665002 env[1445]: time="2025-09-13T00:55:22.664968803Z" level=info msg="CreateContainer within sandbox \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:55:22.686996 env[1445]: time="2025-09-13T00:55:22.686957882Z" level=info msg="CreateContainer within sandbox \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\"" Sep 13 00:55:22.687489 env[1445]: time="2025-09-13T00:55:22.687462581Z" level=info msg="StartContainer for \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\"" Sep 13 00:55:22.703755 systemd[1]: Started cri-containerd-8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3.scope. Sep 13 00:55:22.718659 systemd[1]: cri-containerd-8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3.scope: Deactivated successfully. 
Sep 13 00:55:22.786476 env[1445]: time="2025-09-13T00:55:22.786415186Z" level=info msg="shim disconnected" id=8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3
Sep 13 00:55:22.786476 env[1445]: time="2025-09-13T00:55:22.786475786Z" level=warning msg="cleaning up after shim disconnected" id=8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3 namespace=k8s.io
Sep 13 00:55:22.786762 env[1445]: time="2025-09-13T00:55:22.786487286Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:22.794390 env[1445]: time="2025-09-13T00:55:22.794346978Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4206 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:55:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Sep 13 00:55:22.794707 env[1445]: time="2025-09-13T00:55:22.794609878Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Sep 13 00:55:22.797145 env[1445]: time="2025-09-13T00:55:22.797102775Z" level=error msg="Failed to pipe stdout of container \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\"" error="reading from a closed fifo"
Sep 13 00:55:22.797314 env[1445]: time="2025-09-13T00:55:22.797285975Z" level=error msg="Failed to pipe stderr of container \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\"" error="reading from a closed fifo"
Sep 13 00:55:22.801628 env[1445]: time="2025-09-13T00:55:22.801580571Z" level=error msg="StartContainer for \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Sep 13 00:55:22.801880 kubelet[2419]: E0913 00:55:22.801843 2419 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3"
Sep 13 00:55:22.802324 kubelet[2419]: E0913 00:55:22.802055 2419 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Sep 13 00:55:22.802324 kubelet[2419]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Sep 13 00:55:22.802324 kubelet[2419]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Sep 13 00:55:22.802324 kubelet[2419]: rm /hostbin/cilium-mount
Sep 13 00:55:22.802583 kubelet[2419]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skq2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dzm5h_kube-system(5de42f08-bea6-4365-a6fa-aa8f2bb408e7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Sep 13 00:55:22.802583 kubelet[2419]: > logger="UnhandledError"
Sep 13 00:55:22.803984 kubelet[2419]: E0913 00:55:22.803953 2419 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dzm5h" podUID="5de42f08-bea6-4365-a6fa-aa8f2bb408e7"
Sep 13 00:55:23.082543 sshd[4138]: Accepted publickey for core from 10.200.16.10 port 43578 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:55:23.083817 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:23.088094 systemd-logind[1431]: New session 25 of user core.
Sep 13 00:55:23.088371 systemd[1]: Started session-25.scope.
Sep 13 00:55:23.249070 env[1445]: time="2025-09-13T00:55:23.244975756Z" level=info msg="CreateContainer within sandbox \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Sep 13 00:55:23.274810 env[1445]: time="2025-09-13T00:55:23.274764728Z" level=info msg="CreateContainer within sandbox \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\""
Sep 13 00:55:23.275479 env[1445]: time="2025-09-13T00:55:23.275445628Z" level=info msg="StartContainer for \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\""
Sep 13 00:55:23.292225 systemd[1]: Started cri-containerd-2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d.scope.
Sep 13 00:55:23.301936 systemd[1]: cri-containerd-2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d.scope: Deactivated successfully.
Sep 13 00:55:23.302250 systemd[1]: Stopped cri-containerd-2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d.scope.
Sep 13 00:55:23.319209 env[1445]: time="2025-09-13T00:55:23.319083488Z" level=info msg="shim disconnected" id=2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d
Sep 13 00:55:23.319443 env[1445]: time="2025-09-13T00:55:23.319421988Z" level=warning msg="cleaning up after shim disconnected" id=2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d namespace=k8s.io
Sep 13 00:55:23.319533 env[1445]: time="2025-09-13T00:55:23.319519788Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:23.326511 env[1445]: time="2025-09-13T00:55:23.326473381Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4244 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:55:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Sep 13 00:55:23.326766 env[1445]: time="2025-09-13T00:55:23.326711581Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Sep 13 00:55:23.326944 env[1445]: time="2025-09-13T00:55:23.326903481Z" level=error msg="Failed to pipe stdout of container \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\"" error="reading from a closed fifo"
Sep 13 00:55:23.327105 env[1445]: time="2025-09-13T00:55:23.327075681Z" level=error msg="Failed to pipe stderr of container \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\"" error="reading from a closed fifo"
Sep 13 00:55:23.331958 env[1445]: time="2025-09-13T00:55:23.331917376Z" level=error msg="StartContainer for \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Sep 13 00:55:23.332145 kubelet[2419]: E0913 00:55:23.332107 2419 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d"
Sep 13 00:55:23.332475 kubelet[2419]: E0913 00:55:23.332237 2419 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Sep 13 00:55:23.332475 kubelet[2419]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Sep 13 00:55:23.332475 kubelet[2419]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Sep 13 00:55:23.332475 kubelet[2419]: rm /hostbin/cilium-mount
Sep 13 00:55:23.332475 kubelet[2419]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skq2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dzm5h_kube-system(5de42f08-bea6-4365-a6fa-aa8f2bb408e7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Sep 13 00:55:23.332475 kubelet[2419]: > logger="UnhandledError"
Sep 13 00:55:23.334842 kubelet[2419]: E0913 00:55:23.333869 2419 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dzm5h" podUID="5de42f08-bea6-4365-a6fa-aa8f2bb408e7"
Sep 13 00:55:23.586713 sshd[4138]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:23.589670 systemd[1]: sshd@22-10.200.4.42:22-10.200.16.10:43578.service: Deactivated successfully.
Sep 13 00:55:23.590483 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:55:23.591086 systemd-logind[1431]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:55:23.592006 systemd-logind[1431]: Removed session 25.
Sep 13 00:55:23.685585 systemd[1]: Started sshd@23-10.200.4.42:22-10.200.16.10:43582.service.
Sep 13 00:55:24.243772 kubelet[2419]: I0913 00:55:24.243741 2419 scope.go:117] "RemoveContainer" containerID="8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3"
Sep 13 00:55:24.245001 env[1445]: time="2025-09-13T00:55:24.244967455Z" level=info msg="StopPodSandbox for \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\""
Sep 13 00:55:24.245541 env[1445]: time="2025-09-13T00:55:24.245514955Z" level=info msg="Container to stop \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:24.245627 env[1445]: time="2025-09-13T00:55:24.245609055Z" level=info msg="Container to stop \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:24.248709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69-shm.mount: Deactivated successfully.
Sep 13 00:55:24.259051 env[1445]: time="2025-09-13T00:55:24.255646746Z" level=info msg="RemoveContainer for \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\""
Sep 13 00:55:24.263119 systemd[1]: cri-containerd-98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69.scope: Deactivated successfully.
Sep 13 00:55:24.266482 env[1445]: time="2025-09-13T00:55:24.266446437Z" level=info msg="RemoveContainer for \"8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3\" returns successfully"
Sep 13 00:55:24.275859 sshd[4265]: Accepted publickey for core from 10.200.16.10 port 43582 ssh2: RSA SHA256:zK3kxTPXsdaCY/XytugRgS+7VrhsOEAnV/FpwU6+RkI
Sep 13 00:55:24.275613 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:24.282507 systemd[1]: Started session-26.scope.
Sep 13 00:55:24.283111 systemd-logind[1431]: New session 26 of user core.
Sep 13 00:55:24.296333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69-rootfs.mount: Deactivated successfully.
Sep 13 00:55:24.315636 env[1445]: time="2025-09-13T00:55:24.315592194Z" level=info msg="shim disconnected" id=98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69
Sep 13 00:55:24.315884 env[1445]: time="2025-09-13T00:55:24.315864994Z" level=warning msg="cleaning up after shim disconnected" id=98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69 namespace=k8s.io
Sep 13 00:55:24.315972 env[1445]: time="2025-09-13T00:55:24.315961494Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:24.330129 env[1445]: time="2025-09-13T00:55:24.330091182Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4286 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:24.330446 env[1445]: time="2025-09-13T00:55:24.330413382Z" level=info msg="TearDown network for sandbox \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\" successfully"
Sep 13 00:55:24.330511 env[1445]: time="2025-09-13T00:55:24.330450282Z" level=info msg="StopPodSandbox for \"98d2371e05787a47cb6a01e754731238dc46a03fcc25f70f5845de7be29bca69\" returns successfully"
Sep 13 00:55:24.434672 kubelet[2419]: I0913 00:55:24.434636 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skq2n\" (UniqueName: \"kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-kube-api-access-skq2n\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435150 kubelet[2419]: I0913 00:55:24.434681 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hostproc\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435150 kubelet[2419]: I0913 00:55:24.434728 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-ipsec-secrets\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435150 kubelet[2419]: I0913 00:55:24.434750 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-run\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435150 kubelet[2419]: I0913 00:55:24.434796 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hostproc" (OuterVolumeSpecName: "hostproc") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.435325 kubelet[2419]: I0913 00:55:24.435163 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hubble-tls\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435325 kubelet[2419]: I0913 00:55:24.435196 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-clustermesh-secrets\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435325 kubelet[2419]: I0913 00:55:24.435237 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-cgroup\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435325 kubelet[2419]: I0913 00:55:24.435260 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-kernel\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435325 kubelet[2419]: I0913 00:55:24.435300 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-bpf-maps\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435325 kubelet[2419]: I0913 00:55:24.435323 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-net\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435559 kubelet[2419]: I0913 00:55:24.435344 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-etc-cni-netd\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435559 kubelet[2419]: I0913 00:55:24.435376 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-lib-modules\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435559 kubelet[2419]: I0913 00:55:24.435397 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-xtables-lock\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435559 kubelet[2419]: I0913 00:55:24.435417 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cni-path\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435559 kubelet[2419]: I0913 00:55:24.435458 2419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-config-path\") pod \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\" (UID: \"5de42f08-bea6-4365-a6fa-aa8f2bb408e7\") "
Sep 13 00:55:24.435559 kubelet[2419]: I0913 00:55:24.435512 2419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hostproc\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.438300 kubelet[2419]: I0913 00:55:24.438270 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:55:24.438410 kubelet[2419]: I0913 00:55:24.438320 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.442883 kubelet[2419]: I0913 00:55:24.442846 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.443042 kubelet[2419]: I0913 00:55:24.443011 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.443142 kubelet[2419]: I0913 00:55:24.443128 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.443228 kubelet[2419]: I0913 00:55:24.443217 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.443313 kubelet[2419]: I0913 00:55:24.443302 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.443394 kubelet[2419]: I0913 00:55:24.443384 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.443439 systemd[1]: var-lib-kubelet-pods-5de42f08\x2dbea6\x2d4365\x2da6fa\x2daa8f2bb408e7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:55:24.443610 kubelet[2419]: I0913 00:55:24.443597 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.444009 kubelet[2419]: I0913 00:55:24.443991 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cni-path" (OuterVolumeSpecName: "cni-path") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:55:24.444546 kubelet[2419]: I0913 00:55:24.444521 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:55:24.446798 systemd[1]: var-lib-kubelet-pods-5de42f08\x2dbea6\x2d4365\x2da6fa\x2daa8f2bb408e7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:55:24.448095 kubelet[2419]: I0913 00:55:24.448068 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:55:24.453134 kubelet[2419]: I0913 00:55:24.453096 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-kube-api-access-skq2n" (OuterVolumeSpecName: "kube-api-access-skq2n") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "kube-api-access-skq2n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:55:24.453544 systemd[1]: var-lib-kubelet-pods-5de42f08\x2dbea6\x2d4365\x2da6fa\x2daa8f2bb408e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskq2n.mount: Deactivated successfully.
Sep 13 00:55:24.457165 systemd[1]: var-lib-kubelet-pods-5de42f08\x2dbea6\x2d4365\x2da6fa\x2daa8f2bb408e7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:55:24.459101 kubelet[2419]: I0913 00:55:24.459079 2419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5de42f08-bea6-4365-a6fa-aa8f2bb408e7" (UID: "5de42f08-bea6-4365-a6fa-aa8f2bb408e7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:55:24.536216 kubelet[2419]: I0913 00:55:24.536118 2419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-bpf-maps\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536216 kubelet[2419]: I0913 00:55:24.536147 2419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-net\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536216 kubelet[2419]: I0913 00:55:24.536174 2419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-etc-cni-netd\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536216 kubelet[2419]: I0913 00:55:24.536197 2419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-lib-modules\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536216 kubelet[2419]: I0913 00:55:24.536209 2419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-xtables-lock\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536216 kubelet[2419]: I0913 00:55:24.536220 2419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cni-path\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536233 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-config-path\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536246 2419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-skq2n\" (UniqueName: \"kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-kube-api-access-skq2n\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536257 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536268 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-run\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536279 2419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-hubble-tls\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536290 2419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-clustermesh-secrets\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536303 2419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-cilium-cgroup\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.536550 kubelet[2419]: I0913 00:55:24.536314 2419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5de42f08-bea6-4365-a6fa-aa8f2bb408e7-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-2e01e92296\" DevicePath \"\""
Sep 13 00:55:24.808527 systemd[1]: Removed slice kubepods-burstable-pod5de42f08_bea6_4365_a6fa_aa8f2bb408e7.slice.
Sep 13 00:55:25.080115 kubelet[2419]: I0913 00:55:25.079991 2419 setters.go:618] "Node became not ready" node="ci-3510.3.8-n-2e01e92296" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:55:25Z","lastTransitionTime":"2025-09-13T00:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:55:25.246963 kubelet[2419]: I0913 00:55:25.246939 2419 scope.go:117] "RemoveContainer" containerID="2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d"
Sep 13 00:55:25.248700 env[1445]: time="2025-09-13T00:55:25.248369504Z" level=info msg="RemoveContainer for \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\""
Sep 13 00:55:25.255034 env[1445]: time="2025-09-13T00:55:25.255000198Z" level=info msg="RemoveContainer for \"2304f231b65b3f76e5c458296d037fcf41059b6856dd7608df248b55d44dab7d\" returns successfully"
Sep 13 00:55:25.324062 systemd[1]: Created slice kubepods-burstable-pod5d9fb9c9_8e5d_4710_8e92_d71ff2ca4306.slice.
Sep 13 00:55:25.340696 kubelet[2419]: I0913 00:55:25.340614 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-etc-cni-netd\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.340897 kubelet[2419]: I0913 00:55:25.340879 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-clustermesh-secrets\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341014 kubelet[2419]: I0913 00:55:25.341000 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-bpf-maps\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341152 kubelet[2419]: I0913 00:55:25.341136 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-hostproc\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341250 kubelet[2419]: I0913 00:55:25.341237 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-cilium-cgroup\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341361 kubelet[2419]: I0913 00:55:25.341329 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-xtables-lock\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341443 kubelet[2419]: I0913 00:55:25.341431 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-hubble-tls\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341521 kubelet[2419]: I0913 00:55:25.341510 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-cilium-run\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341605 kubelet[2419]: I0913 00:55:25.341591 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-host-proc-sys-net\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.341692 kubelet[2419]: I0913 00:55:25.341680 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-cni-path\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.343394 kubelet[2419]: I0913 00:55:25.343337 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-cilium-config-path\") pod 
\"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.343516 kubelet[2419]: I0913 00:55:25.343505 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-cilium-ipsec-secrets\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.343576 kubelet[2419]: I0913 00:55:25.343568 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-host-proc-sys-kernel\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.343630 kubelet[2419]: I0913 00:55:25.343623 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-lib-modules\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.343684 kubelet[2419]: I0913 00:55:25.343676 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrm4\" (UniqueName: \"kubernetes.io/projected/5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306-kube-api-access-vqrm4\") pod \"cilium-jdsl6\" (UID: \"5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306\") " pod="kube-system/cilium-jdsl6" Sep 13 00:55:25.627518 env[1445]: time="2025-09-13T00:55:25.627410197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdsl6,Uid:5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306,Namespace:kube-system,Attempt:0,}" Sep 13 00:55:25.653894 env[1445]: time="2025-09-13T00:55:25.653820375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:25.653894 env[1445]: time="2025-09-13T00:55:25.653858175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:25.653894 env[1445]: time="2025-09-13T00:55:25.653873575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:25.654322 env[1445]: time="2025-09-13T00:55:25.654277175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf pid=4322 runtime=io.containerd.runc.v2 Sep 13 00:55:25.665496 systemd[1]: Started cri-containerd-3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf.scope. Sep 13 00:55:25.688264 env[1445]: time="2025-09-13T00:55:25.687804448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdsl6,Uid:5d9fb9c9-8e5d-4710-8e92-d71ff2ca4306,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\"" Sep 13 00:55:25.695249 env[1445]: time="2025-09-13T00:55:25.695215242Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:55:25.718333 env[1445]: time="2025-09-13T00:55:25.718296423Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6\"" Sep 13 00:55:25.718788 env[1445]: time="2025-09-13T00:55:25.718749523Z" level=info msg="StartContainer for \"dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6\"" Sep 13 00:55:25.740643 systemd[1]: Started 
cri-containerd-dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6.scope. Sep 13 00:55:25.775083 systemd[1]: cri-containerd-dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6.scope: Deactivated successfully. Sep 13 00:55:25.775521 env[1445]: time="2025-09-13T00:55:25.775474077Z" level=info msg="StartContainer for \"dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6\" returns successfully" Sep 13 00:55:25.822246 env[1445]: time="2025-09-13T00:55:25.822177439Z" level=info msg="shim disconnected" id=dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6 Sep 13 00:55:25.822246 env[1445]: time="2025-09-13T00:55:25.822229139Z" level=warning msg="cleaning up after shim disconnected" id=dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6 namespace=k8s.io Sep 13 00:55:25.822246 env[1445]: time="2025-09-13T00:55:25.822243239Z" level=info msg="cleaning up dead shim" Sep 13 00:55:25.829084 env[1445]: time="2025-09-13T00:55:25.829002833Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4405 runtime=io.containerd.runc.v2\n" Sep 13 00:55:25.892592 kubelet[2419]: E0913 00:55:25.892020 2419 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:55:25.893211 kubelet[2419]: W0913 00:55:25.893174 2419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5de42f08_bea6_4365_a6fa_aa8f2bb408e7.slice/cri-containerd-8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3.scope WatchSource:0}: container "8add93641e2f64e57405ce725c3968aa29984af2c477dab8229759e190c379a3" in namespace "k8s.io": not found Sep 13 00:55:26.260411 env[1445]: time="2025-09-13T00:55:26.260159197Z" level=info msg="CreateContainer within sandbox 
\"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:55:26.287709 env[1445]: time="2025-09-13T00:55:26.287668776Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a\"" Sep 13 00:55:26.288216 env[1445]: time="2025-09-13T00:55:26.288186376Z" level=info msg="StartContainer for \"97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a\"" Sep 13 00:55:26.305600 systemd[1]: Started cri-containerd-97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a.scope. Sep 13 00:55:26.336264 env[1445]: time="2025-09-13T00:55:26.336063239Z" level=info msg="StartContainer for \"97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a\" returns successfully" Sep 13 00:55:26.339948 systemd[1]: cri-containerd-97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a.scope: Deactivated successfully. 
Sep 13 00:55:26.369761 env[1445]: time="2025-09-13T00:55:26.369714214Z" level=info msg="shim disconnected" id=97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a Sep 13 00:55:26.370010 env[1445]: time="2025-09-13T00:55:26.369990614Z" level=warning msg="cleaning up after shim disconnected" id=97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a namespace=k8s.io Sep 13 00:55:26.370116 env[1445]: time="2025-09-13T00:55:26.370102214Z" level=info msg="cleaning up dead shim" Sep 13 00:55:26.378381 env[1445]: time="2025-09-13T00:55:26.378349007Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4469 runtime=io.containerd.runc.v2\n" Sep 13 00:55:26.805330 kubelet[2419]: I0913 00:55:26.805291 2419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de42f08-bea6-4365-a6fa-aa8f2bb408e7" path="/var/lib/kubelet/pods/5de42f08-bea6-4365-a6fa-aa8f2bb408e7/volumes" Sep 13 00:55:27.261638 env[1445]: time="2025-09-13T00:55:27.261343849Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:55:27.294678 env[1445]: time="2025-09-13T00:55:27.294632426Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875\"" Sep 13 00:55:27.295373 env[1445]: time="2025-09-13T00:55:27.295339425Z" level=info msg="StartContainer for \"d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875\"" Sep 13 00:55:27.323324 systemd[1]: Started cri-containerd-d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875.scope. 
Sep 13 00:55:27.347376 systemd[1]: cri-containerd-d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875.scope: Deactivated successfully. Sep 13 00:55:27.349967 env[1445]: time="2025-09-13T00:55:27.349860287Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d9fb9c9_8e5d_4710_8e92_d71ff2ca4306.slice/cri-containerd-d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875.scope/memory.events\": no such file or directory" Sep 13 00:55:27.357422 env[1445]: time="2025-09-13T00:55:27.357373081Z" level=info msg="StartContainer for \"d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875\" returns successfully" Sep 13 00:55:27.390939 env[1445]: time="2025-09-13T00:55:27.390887457Z" level=info msg="shim disconnected" id=d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875 Sep 13 00:55:27.391262 env[1445]: time="2025-09-13T00:55:27.390940857Z" level=warning msg="cleaning up after shim disconnected" id=d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875 namespace=k8s.io Sep 13 00:55:27.391262 env[1445]: time="2025-09-13T00:55:27.390953357Z" level=info msg="cleaning up dead shim" Sep 13 00:55:27.398282 env[1445]: time="2025-09-13T00:55:27.398244452Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4527 runtime=io.containerd.runc.v2\n" Sep 13 00:55:27.457693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875-rootfs.mount: Deactivated successfully. 
Sep 13 00:55:28.265518 env[1445]: time="2025-09-13T00:55:28.265476749Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:55:28.300860 env[1445]: time="2025-09-13T00:55:28.300818526Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f\"" Sep 13 00:55:28.301531 env[1445]: time="2025-09-13T00:55:28.301491925Z" level=info msg="StartContainer for \"4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f\"" Sep 13 00:55:28.326921 systemd[1]: Started cri-containerd-4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f.scope. Sep 13 00:55:28.351648 systemd[1]: cri-containerd-4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f.scope: Deactivated successfully. 
Sep 13 00:55:28.355532 env[1445]: time="2025-09-13T00:55:28.355490790Z" level=info msg="StartContainer for \"4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f\" returns successfully" Sep 13 00:55:28.387919 env[1445]: time="2025-09-13T00:55:28.387867668Z" level=info msg="shim disconnected" id=4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f Sep 13 00:55:28.387919 env[1445]: time="2025-09-13T00:55:28.387919468Z" level=warning msg="cleaning up after shim disconnected" id=4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f namespace=k8s.io Sep 13 00:55:28.388220 env[1445]: time="2025-09-13T00:55:28.387931868Z" level=info msg="cleaning up dead shim" Sep 13 00:55:28.394890 env[1445]: time="2025-09-13T00:55:28.394856764Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4583 runtime=io.containerd.runc.v2\n" Sep 13 00:55:28.457733 systemd[1]: run-containerd-runc-k8s.io-4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f-runc.JpgR2y.mount: Deactivated successfully. Sep 13 00:55:28.457839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f-rootfs.mount: Deactivated successfully. 
Sep 13 00:55:29.002379 kubelet[2419]: W0913 00:55:29.002333 2419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d9fb9c9_8e5d_4710_8e92_d71ff2ca4306.slice/cri-containerd-dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6.scope WatchSource:0}: task dbcd2209bf59b7b563cb0537eece8dbe4f580f37b54a66883b7e65bd057f2aa6 not found Sep 13 00:55:29.270495 env[1445]: time="2025-09-13T00:55:29.270385298Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:55:29.303010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363856219.mount: Deactivated successfully. Sep 13 00:55:29.319485 env[1445]: time="2025-09-13T00:55:29.319336668Z" level=info msg="CreateContainer within sandbox \"3ce6347fabe8d24178026f903bb714d3e112da442fff86892f7fb03d55d98dcf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c\"" Sep 13 00:55:29.321366 env[1445]: time="2025-09-13T00:55:29.320170467Z" level=info msg="StartContainer for \"6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c\"" Sep 13 00:55:29.341465 systemd[1]: Started cri-containerd-6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c.scope. 
Sep 13 00:55:29.385244 env[1445]: time="2025-09-13T00:55:29.385199327Z" level=info msg="StartContainer for \"6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c\" returns successfully" Sep 13 00:55:29.765067 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:55:30.283547 kubelet[2419]: I0913 00:55:30.283479 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jdsl6" podStartSLOduration=5.28346169 podStartE2EDuration="5.28346169s" podCreationTimestamp="2025-09-13 00:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:30.28318269 +0000 UTC m=+199.582482867" watchObservedRunningTime="2025-09-13 00:55:30.28346169 +0000 UTC m=+199.582761867" Sep 13 00:55:30.777023 systemd[1]: run-containerd-runc-k8s.io-6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c-runc.53kb5r.mount: Deactivated successfully. Sep 13 00:55:32.111051 kubelet[2419]: W0913 00:55:32.111000 2419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d9fb9c9_8e5d_4710_8e92_d71ff2ca4306.slice/cri-containerd-97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a.scope WatchSource:0}: task 97f42dbf83cd754b2fdfd9d4056a86202d3e27f7b55899afbb8aa2c1f3aa265a not found Sep 13 00:55:32.430183 systemd-networkd[1608]: lxc_health: Link UP Sep 13 00:55:32.448099 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:55:32.451271 systemd-networkd[1608]: lxc_health: Gained carrier Sep 13 00:55:33.796277 systemd-networkd[1608]: lxc_health: Gained IPv6LL Sep 13 00:55:35.219809 systemd[1]: run-containerd-runc-k8s.io-6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c-runc.CnJTTa.mount: Deactivated successfully. 
Sep 13 00:55:35.226970 kubelet[2419]: W0913 00:55:35.226926 2419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d9fb9c9_8e5d_4710_8e92_d71ff2ca4306.slice/cri-containerd-d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875.scope WatchSource:0}: task d4e8052b1e9d84957dd104c4d276d738acb0e1ee241c6fec47c1bc1feab74875 not found Sep 13 00:55:37.369662 systemd[1]: run-containerd-runc-k8s.io-6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c-runc.SOa9aN.mount: Deactivated successfully. Sep 13 00:55:38.340469 kubelet[2419]: W0913 00:55:38.340430 2419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d9fb9c9_8e5d_4710_8e92_d71ff2ca4306.slice/cri-containerd-4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f.scope WatchSource:0}: task 4a1eddfab3add74dc4f038b105fb676621d19c8133b226eae0be55afe939790f not found Sep 13 00:55:39.477808 systemd[1]: run-containerd-runc-k8s.io-6f4df979d381a13bd287b50ec59b76a23775d19db2f16909023786187f0a0f5c-runc.ZsZnxn.mount: Deactivated successfully. Sep 13 00:55:39.625832 sshd[4265]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:39.628736 systemd[1]: sshd@23-10.200.4.42:22-10.200.16.10:43582.service: Deactivated successfully. Sep 13 00:55:39.629534 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:55:39.630195 systemd-logind[1431]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:55:39.631007 systemd-logind[1431]: Removed session 26.