Sep 6 00:39:47.426532 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:39:47.426560 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:39:47.426571 kernel: BIOS-provided physical RAM map:
Sep 6 00:39:47.426578 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 6 00:39:47.426594 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 6 00:39:47.426600 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 6 00:39:47.426611 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
Sep 6 00:39:47.426619 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd1fff] usable
Sep 6 00:39:47.426625 kernel: BIOS-e820: [mem 0x000000003ffd2000-0x000000003fffafff] ACPI data
Sep 6 00:39:47.426633 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 6 00:39:47.426640 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 6 00:39:47.426646 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 6 00:39:47.426652 kernel: printk: bootconsole [earlyser0] enabled
Sep 6 00:39:47.426660 kernel: NX (Execute Disable) protection: active
Sep 6 00:39:47.426672 kernel: efi: EFI v2.70 by Microsoft
Sep 6 00:39:47.426681 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f33fa98 RNG=0x3ffd2018
Sep 6 00:39:47.426687 kernel: random: crng init done
Sep 6 00:39:47.426693 kernel: SMBIOS 3.1.0 present.
Sep 6 00:39:47.426703 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 6 00:39:47.426709 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 6 00:39:47.426719 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 6 00:39:47.426725 kernel: Hyper-V Host Build:26100-10.0-1-0.1293
Sep 6 00:39:47.426734 kernel: Hyper-V: Nested features: 0x1e0101
Sep 6 00:39:47.426742 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 6 00:39:47.426750 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 6 00:39:47.426756 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 6 00:39:47.426762 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 6 00:39:47.426770 kernel: tsc: Detected 2593.906 MHz processor
Sep 6 00:39:47.426779 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:39:47.426785 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:39:47.426792 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 6 00:39:47.426798 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:39:47.426807 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 6 00:39:47.426813 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 6 00:39:47.426822 kernel: Using GB pages for direct mapping
Sep 6 00:39:47.426830 kernel: Secure boot disabled
Sep 6 00:39:47.426836 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:39:47.426846 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 6 00:39:47.426853 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426859 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426876 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 6 00:39:47.426883 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 6 00:39:47.426890 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426896 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426904 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426914 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426924 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426931 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 00:39:47.426941 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 6 00:39:47.426950 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Sep 6 00:39:47.426958 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 6 00:39:47.426967 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 6 00:39:47.426974 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 6 00:39:47.426981 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 6 00:39:47.426989 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 6 00:39:47.427000 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 6 00:39:47.427008 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 6 00:39:47.427017 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 6 00:39:47.427024 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 6 00:39:47.427031 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 6 00:39:47.427038 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 6 00:39:47.427048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 6 00:39:47.427057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 6 00:39:47.427065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 6 00:39:47.427075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 6 00:39:47.427082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 6 00:39:47.427090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 6 00:39:47.427099 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 6 00:39:47.427107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 6 00:39:47.427116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 6 00:39:47.427123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 6 00:39:47.427130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 6 00:39:47.427142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 6 00:39:47.427149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 6 00:39:47.427160 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 6 00:39:47.427167 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 6 00:39:47.427174 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 6 00:39:47.427181 kernel: Zone ranges:
Sep 6 00:39:47.427191 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:39:47.427198 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 6 00:39:47.427208 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 6 00:39:47.427219 kernel: Movable zone start for each node
Sep 6 00:39:47.427225 kernel: Early memory node ranges
Sep 6 00:39:47.427236 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 6 00:39:47.427243 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 6 00:39:47.427252 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd1fff]
Sep 6 00:39:47.427260 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 6 00:39:47.427267 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 6 00:39:47.427274 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 6 00:39:47.427283 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:39:47.427293 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 6 00:39:47.427304 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Sep 6 00:39:47.427311 kernel: On node 0, zone DMA32: 45 pages in unavailable ranges
Sep 6 00:39:47.427317 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 6 00:39:47.427326 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 6 00:39:47.427334 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:39:47.427342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:39:47.427351 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:39:47.427358 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 6 00:39:47.427368 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 6 00:39:47.427378 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 6 00:39:47.427385 kernel: Booting paravirtualized kernel on Hyper-V
Sep 6 00:39:47.427392 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:39:47.427402 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 6 00:39:47.427409 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 6 00:39:47.427416 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 6 00:39:47.427423 kernel: pcpu-alloc: [0] 0 1
Sep 6 00:39:47.427432 kernel: Hyper-V: PV spinlocks enabled
Sep 6 00:39:47.427443 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 6 00:39:47.427452 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062375
Sep 6 00:39:47.427458 kernel: Policy zone: Normal
Sep 6 00:39:47.427466 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:39:47.427477 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:39:47.427484 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 6 00:39:47.427494 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:39:47.427501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:39:47.427511 kernel: Memory: 8069180K/8387512K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 318072K reserved, 0K cma-reserved)
Sep 6 00:39:47.427522 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:39:47.427541 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:39:47.427551 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:39:47.427558 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:39:47.427570 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:39:47.427578 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:39:47.427617 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:39:47.427625 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:39:47.427636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:39:47.427643 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:39:47.427651 kernel: Using NULL legacy PIC
Sep 6 00:39:47.427664 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 6 00:39:47.427672 kernel: Console: colour dummy device 80x25
Sep 6 00:39:47.427682 kernel: printk: console [tty1] enabled
Sep 6 00:39:47.427689 kernel: printk: console [ttyS0] enabled
Sep 6 00:39:47.427697 kernel: printk: bootconsole [earlyser0] disabled
Sep 6 00:39:47.427710 kernel: ACPI: Core revision 20210730
Sep 6 00:39:47.427719 kernel: Failed to register legacy timer interrupt
Sep 6 00:39:47.427729 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:39:47.427736 kernel: Hyper-V: Using IPI hypercalls
Sep 6 00:39:47.427744 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Sep 6 00:39:47.427755 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 6 00:39:47.427762 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 6 00:39:47.427773 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:39:47.427780 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 00:39:47.427792 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:39:47.427803 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 6 00:39:47.427811 kernel: RETBleed: Vulnerable
Sep 6 00:39:47.427818 kernel: Speculative Store Bypass: Vulnerable
Sep 6 00:39:47.427825 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 6 00:39:47.427832 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 6 00:39:47.427839 kernel: active return thunk: its_return_thunk
Sep 6 00:39:47.427846 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 00:39:47.427854 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:39:47.427864 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:39:47.427878 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:39:47.427894 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 6 00:39:47.427902 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 6 00:39:47.427909 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 6 00:39:47.427917 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:39:47.427930 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 6 00:39:47.427944 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 6 00:39:47.427958 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 6 00:39:47.427972 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 6 00:39:47.427983 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:39:47.427991 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:39:47.427998 kernel: LSM: Security Framework initializing
Sep 6 00:39:47.428012 kernel: SELinux: Initializing.
Sep 6 00:39:47.428027 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:39:47.428041 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:39:47.428055 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 6 00:39:47.428068 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 6 00:39:47.428076 kernel: signal: max sigframe size: 3632
Sep 6 00:39:47.428083 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:39:47.428092 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 6 00:39:47.428106 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:39:47.428120 kernel: x86: Booting SMP configuration:
Sep 6 00:39:47.428135 kernel: .... node #0, CPUs: #1
Sep 6 00:39:47.428142 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 6 00:39:47.428152 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 6 00:39:47.428166 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:39:47.428180 kernel: smpboot: Max logical packages: 1
Sep 6 00:39:47.428192 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Sep 6 00:39:47.428201 kernel: devtmpfs: initialized
Sep 6 00:39:47.428208 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:39:47.428222 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 6 00:39:47.428237 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:39:47.428249 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:39:47.428257 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:39:47.428265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:39:47.428272 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:39:47.428279 kernel: audit: type=2000 audit(1757119185.025:1): state=initialized audit_enabled=0 res=1
Sep 6 00:39:47.428287 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:39:47.428294 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:39:47.428312 kernel: cpuidle: using governor menu
Sep 6 00:39:47.428326 kernel: ACPI: bus type PCI registered
Sep 6 00:39:47.428340 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:39:47.428349 kernel: dca service started, version 1.12.1
Sep 6 00:39:47.428357 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:39:47.428364 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:39:47.428372 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:39:47.428379 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:39:47.428390 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:39:47.428408 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:39:47.428421 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:39:47.428429 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:39:47.428436 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:39:47.428443 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:39:47.428454 kernel: ACPI: Interpreter enabled
Sep 6 00:39:47.428468 kernel: ACPI: PM: (supports S0 S5)
Sep 6 00:39:47.428482 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:39:47.428496 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:39:47.428514 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 6 00:39:47.428524 kernel: iommu: Default domain type: Translated
Sep 6 00:39:47.428532 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:39:47.428539 kernel: vgaarb: loaded
Sep 6 00:39:47.428551 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:39:47.428565 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Sep 6 00:39:47.428586 kernel: PTP clock support registered
Sep 6 00:39:47.428600 kernel: Registered efivars operations
Sep 6 00:39:47.428614 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:39:47.428628 kernel: PCI: System does not support PCI
Sep 6 00:39:47.428641 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 6 00:39:47.428649 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:39:47.428656 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:39:47.428669 kernel: pnp: PnP ACPI init
Sep 6 00:39:47.428683 kernel: pnp: PnP ACPI: found 3 devices
Sep 6 00:39:47.428698 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:39:47.428711 kernel: NET: Registered PF_INET protocol family
Sep 6 00:39:47.428723 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 6 00:39:47.428730 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 6 00:39:47.428747 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:39:47.428761 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:39:47.428774 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Sep 6 00:39:47.428781 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 6 00:39:47.428789 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 6 00:39:47.428801 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 6 00:39:47.428815 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:39:47.428829 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:39:47.428843 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:39:47.428850 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 6 00:39:47.428858 kernel: software IO TLB: mapped [mem 0x000000003aa89000-0x000000003ea89000] (64MB)
Sep 6 00:39:47.428872 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 6 00:39:47.428886 kernel: Initialise system trusted keyrings
Sep 6 00:39:47.428901 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 6 00:39:47.428912 kernel: Key type asymmetric registered
Sep 6 00:39:47.428920 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:39:47.428928 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:39:47.428947 kernel: io scheduler mq-deadline registered
Sep 6 00:39:47.428959 kernel: io scheduler kyber registered
Sep 6 00:39:47.428967 kernel: io scheduler bfq registered
Sep 6 00:39:47.428974 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:39:47.428985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:39:47.428999 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:39:47.429007 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 6 00:39:47.429014 kernel: i8042: PNP: No PS/2 controller found.
Sep 6 00:39:47.429198 kernel: rtc_cmos 00:02: registered as rtc0
Sep 6 00:39:47.429312 kernel: rtc_cmos 00:02: setting system clock to 2025-09-06T00:39:46 UTC (1757119186)
Sep 6 00:39:47.429418 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 6 00:39:47.429428 kernel: intel_pstate: CPU model not supported
Sep 6 00:39:47.429440 kernel: efifb: probing for efifb
Sep 6 00:39:47.429453 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 6 00:39:47.429464 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 6 00:39:47.429475 kernel: efifb: scrolling: redraw
Sep 6 00:39:47.429486 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 6 00:39:47.429503 kernel: Console: switching to colour frame buffer device 128x48
Sep 6 00:39:47.429514 kernel: fb0: EFI VGA frame buffer device
Sep 6 00:39:47.429526 kernel: pstore: Registered efi as persistent store backend
Sep 6 00:39:47.429538 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:39:47.429550 kernel: Segment Routing with IPv6
Sep 6 00:39:47.429563 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:39:47.429576 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:39:47.432468 kernel: Key type dns_resolver registered
Sep 6 00:39:47.432486 kernel: IPI shorthand broadcast: enabled
Sep 6 00:39:47.432508 kernel: sched_clock: Marking stable (926506800, 33706900)->(1209383100, -249169400)
Sep 6 00:39:47.432523 kernel: registered taskstats version 1
Sep 6 00:39:47.432536 kernel: Loading compiled-in X.509 certificates
Sep 6 00:39:47.432551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:39:47.432564 kernel: Key type .fscrypt registered
Sep 6 00:39:47.432578 kernel: Key type fscrypt-provisioning registered
Sep 6 00:39:47.432609 kernel: pstore: Using crash dump compression: deflate
Sep 6 00:39:47.432623 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:39:47.432637 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:39:47.432655 kernel: ima: No architecture policies found
Sep 6 00:39:47.432669 kernel: clk: Disabling unused clocks
Sep 6 00:39:47.432683 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:39:47.432697 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:39:47.432711 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:39:47.432725 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:39:47.432739 kernel: Run /init as init process
Sep 6 00:39:47.432753 kernel: with arguments:
Sep 6 00:39:47.432767 kernel: /init
Sep 6 00:39:47.432783 kernel: with environment:
Sep 6 00:39:47.432796 kernel: HOME=/
Sep 6 00:39:47.432810 kernel: TERM=linux
Sep 6 00:39:47.432823 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:39:47.432842 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:39:47.432859 systemd[1]: Detected virtualization microsoft.
Sep 6 00:39:47.432875 systemd[1]: Detected architecture x86-64.
Sep 6 00:39:47.432887 systemd[1]: Running in initrd.
Sep 6 00:39:47.432904 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:39:47.432918 systemd[1]: Hostname set to <localhost>.
Sep 6 00:39:47.432934 systemd[1]: Initializing machine ID from random generator.
Sep 6 00:39:47.432948 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:39:47.432962 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:39:47.432977 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:39:47.432991 systemd[1]: Reached target paths.target.
Sep 6 00:39:47.433005 systemd[1]: Reached target slices.target.
Sep 6 00:39:47.433022 systemd[1]: Reached target swap.target.
Sep 6 00:39:47.433037 systemd[1]: Reached target timers.target.
Sep 6 00:39:47.433052 systemd[1]: Listening on iscsid.socket.
Sep 6 00:39:47.433067 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:39:47.433082 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:39:47.433096 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:39:47.433110 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:39:47.433128 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:39:47.433143 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:39:47.433158 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:39:47.433172 systemd[1]: Reached target sockets.target.
Sep 6 00:39:47.433187 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:39:47.433201 systemd[1]: Finished network-cleanup.service.
Sep 6 00:39:47.433216 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:39:47.433231 systemd[1]: Starting systemd-journald.service...
Sep 6 00:39:47.433245 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:39:47.433263 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:39:47.433277 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:39:47.433298 systemd-journald[183]: Journal started
Sep 6 00:39:47.433379 systemd-journald[183]: Runtime Journal (/run/log/journal/9aae3dbb3f7c48bdac6dd4eb74dc99c0) is 8.0M, max 159.0M, 151.0M free.
Sep 6 00:39:47.433899 systemd-modules-load[184]: Inserted module 'overlay'
Sep 6 00:39:47.451929 systemd[1]: Started systemd-journald.service.
Sep 6 00:39:47.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.476348 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:39:47.484278 kernel: audit: type=1130 audit(1757119187.453:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.484672 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:39:47.496600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:39:47.501059 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:39:47.510118 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 6 00:39:47.513053 kernel: Bridge firewalling registered
Sep 6 00:39:47.514448 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:39:47.525524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:39:47.536243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:39:47.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.557240 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:39:47.568721 kernel: audit: type=1130 audit(1757119187.484:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.565678 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:39:47.573363 systemd-resolved[185]: Positive Trust Anchors:
Sep 6 00:39:47.573379 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:39:47.573431 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:39:47.600529 dracut-cmdline[200]: dracut-dracut-053
Sep 6 00:39:47.605479 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:39:47.605350 systemd-resolved[185]: Defaulting to hostname 'linux'.
Sep 6 00:39:47.627545 systemd[1]: Started systemd-resolved.service.
Sep 6 00:39:47.632530 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:39:47.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.654423 kernel: audit: type=1130 audit(1757119187.500:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.654495 kernel: SCSI subsystem initialized
Sep 6 00:39:47.654509 kernel: audit: type=1130 audit(1757119187.510:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.691116 kernel: audit: type=1130 audit(1757119187.538:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.691190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:39:47.691205 kernel: audit: type=1130 audit(1757119187.562:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.711346 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:39:47.711412 kernel: audit: type=1130 audit(1757119187.631:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.731785 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:39:47.735232 systemd-modules-load[184]: Inserted module 'dm_multipath'
Sep 6 00:39:47.739037 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:39:47.745524 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:39:47.765280 kernel: audit: type=1130 audit(1757119187.744:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.766411 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:39:47.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.786611 kernel: audit: type=1130 audit(1757119187.771:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.794600 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:39:47.815614 kernel: iscsi: registered transport (tcp)
Sep 6 00:39:47.843380 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:39:47.843473 kernel: QLogic iSCSI HBA Driver
Sep 6 00:39:47.875819 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:39:47.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:47.881797 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:39:47.934615 kernel: raid6: avx512x4 gen() 26041 MB/s
Sep 6 00:39:47.954597 kernel: raid6: avx512x4 xor() 5882 MB/s
Sep 6 00:39:47.974593 kernel: raid6: avx512x2 gen() 26720 MB/s
Sep 6 00:39:47.995596 kernel: raid6: avx512x2 xor() 29739 MB/s
Sep 6 00:39:48.015593 kernel: raid6: avx512x1 gen() 26871 MB/s
Sep 6 00:39:48.035595 kernel: raid6: avx512x1 xor() 26686 MB/s
Sep 6 00:39:48.056595 kernel: raid6: avx2x4 gen() 20910 MB/s
Sep 6 00:39:48.076593 kernel: raid6: avx2x4 xor() 6003 MB/s
Sep 6 00:39:48.096592 kernel: raid6: avx2x2 gen() 23520 MB/s
Sep 6 00:39:48.117596 kernel: raid6: avx2x2 xor() 22007 MB/s
Sep 6 00:39:48.137595 kernel: raid6: avx2x1 gen() 20509 MB/s
Sep 6 00:39:48.157594 kernel: raid6: avx2x1 xor() 18853 MB/s
Sep 6 00:39:48.178595 kernel: raid6: sse2x4 gen() 10064 MB/s
Sep 6 00:39:48.198593 kernel: raid6: sse2x4 xor() 6411 MB/s
Sep 6 00:39:48.218592 kernel: raid6: sse2x2 gen() 11219 MB/s
Sep 6 00:39:48.239594 kernel: raid6: sse2x2 xor() 7215 MB/s
Sep 6 00:39:48.259594 kernel: raid6: sse2x1 gen() 10309 MB/s
Sep 6 00:39:48.283718 kernel: raid6: sse2x1 xor() 5823 MB/s
Sep 6 00:39:48.283741 kernel: raid6: using algorithm avx512x1 gen() 26871 MB/s
Sep 6 00:39:48.283751 kernel: raid6: .... xor() 26686 MB/s, rmw enabled
Sep 6 00:39:48.287457 kernel: raid6: using avx512x2 recovery algorithm
Sep 6 00:39:48.307601 kernel: xor: automatically using best checksumming function avx
Sep 6 00:39:48.407614 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 6 00:39:48.416940 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 00:39:48.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:48.421000 audit: BPF prog-id=7 op=LOAD
Sep 6 00:39:48.421000 audit: BPF prog-id=8 op=LOAD
Sep 6 00:39:48.422821 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:39:48.439759 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Sep 6 00:39:48.447233 systemd[1]: Started systemd-udevd.service.
Sep 6 00:39:48.453487 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 00:39:48.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:48.473816 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation
Sep 6 00:39:48.506273 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 00:39:48.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:48.510171 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:39:48.551969 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:39:48.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:48.601618 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:39:48.613612 kernel: hv_vmbus: Vmbus version:5.2
Sep 6 00:39:48.637609 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 6 00:39:48.650613 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 6 00:39:48.674604 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 6 00:39:48.683606 kernel: hv_vmbus: registering driver hv_storvsc
Sep 6 00:39:48.683650 kernel: AES CTR mode by8 optimization enabled
Sep 6 00:39:48.697519 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:39:48.697571 kernel: hv_vmbus: registering driver hv_netvsc
Sep 6 00:39:48.701196 kernel: scsi host1: storvsc_host_t
Sep 6 00:39:48.703709 kernel: scsi host0: storvsc_host_t
Sep 6 00:39:48.714118 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 6 00:39:48.721118 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 6 00:39:48.731601 kernel: hv_vmbus: registering driver hid_hyperv
Sep 6 00:39:48.751099 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 6 00:39:48.751150 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 6 00:39:48.763426 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 6 00:39:48.772022 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 6 00:39:48.772043 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 6 00:39:48.792206 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 6 00:39:48.792395 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 6 00:39:48.792560 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 6 00:39:48.792743 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 6 00:39:48.792903 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 6 00:39:48.793061 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 00:39:48.793081 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 6 00:39:48.822605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 6 00:39:48.847606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#84 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 6 00:39:48.875607 kernel: hv_netvsc 7c1e5276-d9a0-7c1e-5276-d9a07c1e5276 eth0: VF slot 1 added
Sep 6 00:39:48.884606 kernel: hv_vmbus: registering driver hv_pci
Sep 6 00:39:48.893604 kernel: hv_pci a62e9c82-8849-4cac-bd9b-db8741d71f92: PCI VMBus probing: Using version 0x10004
Sep 6 00:39:48.959793 kernel: hv_pci a62e9c82-8849-4cac-bd9b-db8741d71f92: PCI host bridge to bus 8849:00
Sep 6 00:39:48.959993 kernel: pci_bus 8849:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Sep 6 00:39:48.960173 kernel: pci_bus 8849:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 6 00:39:48.960343 kernel: pci 8849:00:02.0: [15b3:1016] type 00 class 0x020000
Sep 6 00:39:48.960532 kernel: pci 8849:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 6 00:39:48.960713 kernel: pci 8849:00:02.0: enabling Extended Tags
Sep 6 00:39:48.960870 kernel: pci 8849:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8849:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Sep 6 00:39:48.961033 kernel: pci_bus 8849:00: busn_res: [bus 00-ff] end is updated to 00
Sep 6 00:39:48.961183 kernel: pci 8849:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 6 00:39:49.059403 kernel: mlx5_core 8849:00:02.0: enabling device (0000 -> 0002)
Sep 6 00:39:49.351133 kernel: mlx5_core 8849:00:02.0: firmware version: 14.30.5000
Sep 6 00:39:49.351337 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (431)
Sep 6 00:39:49.351356 kernel: mlx5_core 8849:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Sep 6 00:39:49.351513 kernel: mlx5_core 8849:00:02.0: Supported tc offload range - chains: 1, prios: 1
Sep 6 00:39:49.351664 kernel: mlx5_core 8849:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing
Sep 6 00:39:49.351767 kernel: hv_netvsc 7c1e5276-d9a0-7c1e-5276-d9a07c1e5276 eth0: VF registering: eth1
Sep 6 00:39:49.351868 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 00:39:49.351879 kernel: mlx5_core 8849:00:02.0 eth1: joined to eth0
Sep 6 00:39:49.351985 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 00:39:49.177443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 00:39:49.201620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:39:49.296404 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 00:39:49.300192 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 00:39:49.304417 systemd[1]: Starting disk-uuid.service...
Sep 6 00:39:49.325511 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 00:39:49.385609 kernel: mlx5_core 8849:00:02.0 enP34889s1: renamed from eth1
Sep 6 00:39:50.340617 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 00:39:50.340847 disk-uuid[551]: The operation has completed successfully.
Sep 6 00:39:50.417567 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:39:50.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.417721 systemd[1]: Finished disk-uuid.service.
Sep 6 00:39:50.431333 systemd[1]: Starting verity-setup.service...
Sep 6 00:39:50.468605 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 6 00:39:50.634854 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 00:39:50.641002 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 00:39:50.645550 systemd[1]: Finished verity-setup.service.
Sep 6 00:39:50.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.723611 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 00:39:50.723779 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 00:39:50.725217 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 00:39:50.762951 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:39:50.762998 kernel: BTRFS info (device sda6): using free space tree
Sep 6 00:39:50.763017 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 00:39:50.726690 systemd[1]: Starting ignition-setup.service...
Sep 6 00:39:50.732685 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 00:39:50.822678 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 00:39:50.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.827000 audit: BPF prog-id=9 op=LOAD
Sep 6 00:39:50.828944 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:39:50.858251 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:39:50.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.859720 systemd-networkd[822]: lo: Link UP
Sep 6 00:39:50.859725 systemd-networkd[822]: lo: Gained carrier
Sep 6 00:39:50.860271 systemd-networkd[822]: Enumeration completed
Sep 6 00:39:50.860358 systemd[1]: Started systemd-networkd.service.
Sep 6 00:39:50.864118 systemd[1]: Reached target network.target.
Sep 6 00:39:50.866956 systemd-networkd[822]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:39:50.883490 systemd[1]: Starting iscsiuio.service...
Sep 6 00:39:50.899530 systemd[1]: Started iscsiuio.service.
Sep 6 00:39:50.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.902198 systemd[1]: Starting iscsid.service...
Sep 6 00:39:50.907456 iscsid[831]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:39:50.907456 iscsid[831]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 6 00:39:50.907456 iscsid[831]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 6 00:39:50.907456 iscsid[831]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 00:39:50.907456 iscsid[831]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 00:39:50.907456 iscsid[831]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:39:50.907456 iscsid[831]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 00:39:50.941588 kernel: mlx5_core 8849:00:02.0 enP34889s1: Link up
Sep 6 00:39:50.938935 systemd[1]: Started iscsid.service.
Sep 6 00:39:50.960653 kernel: hv_netvsc 7c1e5276-d9a0-7c1e-5276-d9a07c1e5276 eth0: Data path switched to VF: enP34889s1
Sep 6 00:39:50.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.967600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 6 00:39:50.967651 systemd[1]: Starting dracut-initqueue.service...
Sep 6 00:39:50.974938 systemd-networkd[822]: enP34889s1: Link UP
Sep 6 00:39:50.975085 systemd-networkd[822]: eth0: Link UP
Sep 6 00:39:50.975305 systemd-networkd[822]: eth0: Gained carrier
Sep 6 00:39:50.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:50.978370 systemd[1]: Finished ignition-setup.service.
Sep 6 00:39:50.983785 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 00:39:50.990887 systemd-networkd[822]: enP34889s1: Gained carrier
Sep 6 00:39:50.995130 systemd[1]: Finished dracut-initqueue.service.
Sep 6 00:39:51.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:51.005668 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 00:39:51.006025 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:39:51.006568 systemd[1]: Reached target remote-fs.target.
Sep 6 00:39:51.015520 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 00:39:51.020379 systemd-networkd[822]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 6 00:39:51.041442 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 00:39:51.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:52.518853 systemd-networkd[822]: eth0: Gained IPv6LL
Sep 6 00:39:53.390420 ignition[835]: Ignition 2.14.0
Sep 6 00:39:53.390439 ignition[835]: Stage: fetch-offline
Sep 6 00:39:53.390538 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:39:53.390616 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 00:39:53.432756 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 00:39:53.480273 ignition[835]: parsed url from cmdline: ""
Sep 6 00:39:53.480290 ignition[835]: no config URL provided
Sep 6 00:39:53.480306 ignition[835]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:39:53.480325 ignition[835]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:39:53.480334 ignition[835]: failed to fetch config: resource requires networking
Sep 6 00:39:53.488800 ignition[835]: Ignition finished successfully
Sep 6 00:39:53.495285 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 00:39:53.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.499705 systemd[1]: Starting ignition-fetch.service...
Sep 6 00:39:53.521560 kernel: kauditd_printk_skb: 18 callbacks suppressed
Sep 6 00:39:53.521613 kernel: audit: type=1130 audit(1757119193.497:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.531129 ignition[852]: Ignition 2.14.0
Sep 6 00:39:53.532522 ignition[852]: Stage: fetch
Sep 6 00:39:53.532680 ignition[852]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:39:53.532711 ignition[852]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 00:39:53.536723 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 00:39:53.536936 ignition[852]: parsed url from cmdline: ""
Sep 6 00:39:53.536939 ignition[852]: no config URL provided
Sep 6 00:39:53.536945 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:39:53.536956 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:39:53.536994 ignition[852]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 6 00:39:53.615880 ignition[852]: GET result: OK
Sep 6 00:39:53.616055 ignition[852]: config has been read from IMDS userdata
Sep 6 00:39:53.616089 ignition[852]: parsing config with SHA512: a3b0fadc2b37dfd9e2133df342f22156f70e5dfc5712b06fecbb786eebf5ea7311b7f0d6bb99421e5de8cbc6a8576522e5a747c41857b896d92d896c623913d1
Sep 6 00:39:53.620109 unknown[852]: fetched base config from "system"
Sep 6 00:39:53.620729 ignition[852]: fetch: fetch complete
Sep 6 00:39:53.651746 kernel: audit: type=1130 audit(1757119193.626:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.620122 unknown[852]: fetched base config from "system"
Sep 6 00:39:53.620734 ignition[852]: fetch: fetch passed
Sep 6 00:39:53.620134 unknown[852]: fetched user config from "azure"
Sep 6 00:39:53.620783 ignition[852]: Ignition finished successfully
Sep 6 00:39:53.622651 systemd[1]: Finished ignition-fetch.service.
Sep 6 00:39:53.628120 systemd[1]: Starting ignition-kargs.service...
Sep 6 00:39:53.673409 ignition[858]: Ignition 2.14.0
Sep 6 00:39:53.673421 ignition[858]: Stage: kargs
Sep 6 00:39:53.673593 ignition[858]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:39:53.673631 ignition[858]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 00:39:53.685970 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 00:39:53.690573 ignition[858]: kargs: kargs passed
Sep 6 00:39:53.690675 ignition[858]: Ignition finished successfully
Sep 6 00:39:53.695941 systemd[1]: Finished ignition-kargs.service.
Sep 6 00:39:53.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.716902 kernel: audit: type=1130 audit(1757119193.698:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.714671 systemd[1]: Starting ignition-disks.service...
Sep 6 00:39:53.724129 ignition[864]: Ignition 2.14.0
Sep 6 00:39:53.724141 ignition[864]: Stage: disks
Sep 6 00:39:53.724295 ignition[864]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:39:53.724328 ignition[864]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 00:39:53.727794 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 00:39:53.731788 ignition[864]: disks: disks passed
Sep 6 00:39:53.731851 ignition[864]: Ignition finished successfully
Sep 6 00:39:53.740153 systemd[1]: Finished ignition-disks.service.
Sep 6 00:39:53.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.745353 systemd[1]: Reached target initrd-root-device.target.
Sep 6 00:39:53.762830 kernel: audit: type=1130 audit(1757119193.744:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.767348 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:39:53.772415 systemd[1]: Reached target local-fs.target.
Sep 6 00:39:53.774721 systemd[1]: Reached target sysinit.target.
Sep 6 00:39:53.779294 systemd[1]: Reached target basic.target.
Sep 6 00:39:53.782728 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 00:39:53.837679 systemd-fsck[872]: ROOT: clean, 629/7326000 files, 481084/7359488 blocks
Sep 6 00:39:53.844373 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 00:39:53.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.850657 systemd[1]: Mounting sysroot.mount...
Sep 6 00:39:53.871318 kernel: audit: type=1130 audit(1757119193.848:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:53.884600 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 00:39:53.885277 systemd[1]: Mounted sysroot.mount.
Sep 6 00:39:53.889823 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 00:39:53.920483 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 00:39:53.926931 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 6 00:39:53.933119 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:39:53.933167 systemd[1]: Reached target ignition-diskful.target.
Sep 6 00:39:53.944889 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 00:39:53.988498 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:39:53.993456 systemd[1]: Starting initrd-setup-root.service...
Sep 6 00:39:54.011730 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (882)
Sep 6 00:39:54.017754 initrd-setup-root[887]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:39:54.029429 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:39:54.029458 kernel: BTRFS info (device sda6): using free space tree
Sep 6 00:39:54.029469 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 00:39:54.032632 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:39:54.043605 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:39:54.063129 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:39:54.070523 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:39:54.440632 systemd[1]: Finished initrd-setup-root.service.
Sep 6 00:39:54.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:54.458625 kernel: audit: type=1130 audit(1757119194.443:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:54.459839 systemd[1]: Starting ignition-mount.service...
Sep 6 00:39:54.465866 systemd[1]: Starting sysroot-boot.service...
Sep 6 00:39:54.478844 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 6 00:39:54.478992 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 6 00:39:54.499266 systemd[1]: Finished sysroot-boot.service.
Sep 6 00:39:54.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:54.522602 kernel: audit: type=1130 audit(1757119194.506:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:54.523114 ignition[948]: INFO : Ignition 2.14.0
Sep 6 00:39:54.523114 ignition[948]: INFO : Stage: mount
Sep 6 00:39:54.528215 ignition[948]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:39:54.528215 ignition[948]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 00:39:54.555940 kernel: audit: type=1130 audit(1757119194.536:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:54.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:54.556020 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 00:39:54.556020 ignition[948]: INFO : mount: mount passed
Sep 6 00:39:54.556020 ignition[948]: INFO : Ignition finished successfully
Sep 6 00:39:54.531772 systemd[1]: Finished ignition-mount.service.
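Every stage logs a SHA512 fingerprint for each config it parses; the digest is just the hash of the raw config bytes, which is why identical configs produce identical fingerprints across stages. A hypothetical check along those lines, with paths copied from the log (run where those files exist for the digests to be comparable):

```python
# Recompute the "parsing config with SHA512: ..." fingerprint as the
# hex SHA-512 of the raw config file contents.
import hashlib
import pathlib

for path in ("/usr/lib/ignition/base.d/base.ign",
             "/usr/lib/ignition/user.ign"):
    p = pathlib.Path(path)
    if p.is_file():
        digest = hashlib.sha512(p.read_bytes()).hexdigest()
        print(f'parsing config with SHA512: {digest} ({path})')
    else:
        print(f'no config at "{path}"')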
Sep 6 00:39:55.130928 coreos-metadata[881]: Sep 06 00:39:55.130 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 6 00:39:55.149531 coreos-metadata[881]: Sep 06 00:39:55.149 INFO Fetch successful
Sep 6 00:39:55.186387 coreos-metadata[881]: Sep 06 00:39:55.186 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 6 00:39:55.196124 coreos-metadata[881]: Sep 06 00:39:55.195 INFO Fetch successful
Sep 6 00:39:55.209240 coreos-metadata[881]: Sep 06 00:39:55.209 INFO wrote hostname ci-3510.3.8-n-cde0707216 to /sysroot/etc/hostname
Sep 6 00:39:55.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:55.211630 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 6 00:39:55.237941 kernel: audit: type=1130 audit(1757119195.216:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:55.218921 systemd[1]: Starting ignition-files.service...
Sep 6 00:39:55.241555 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:39:55.258764 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (961)
Sep 6 00:39:55.258812 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:39:55.268588 kernel: BTRFS info (device sda6): using free space tree
Sep 6 00:39:55.268614 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 00:39:55.278238 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:39:55.295239 ignition[980]: INFO : Ignition 2.14.0
Sep 6 00:39:55.295239 ignition[980]: INFO : Stage: files
Sep 6 00:39:55.299722 ignition[980]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:39:55.299722 ignition[980]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 00:39:55.322597 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 00:39:55.340142 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:39:55.344006 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:39:55.344006 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:39:55.411094 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:39:55.417537 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:39:55.417537 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:39:55.417537 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:39:55.417537 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 6 00:39:55.416437 unknown[980]: wrote ssh authorized keys file for user: core
Sep 6 00:39:55.487950 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 00:39:55.663186 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:39:55.669608 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:39:55.669608 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 6 00:39:55.869390 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:39:56.010980 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:39:56.010980 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 6 00:39:56.023044 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1836325664"
Sep 6 00:39:56.104373 ignition[980]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1836325664": device or resource busy
Sep 6 00:39:56.104373 ignition[980]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1836325664", trying btrfs: device or resource busy
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1836325664"
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1836325664"
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1836325664"
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1836325664"
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2466897850"
Sep 6 00:39:56.104373 ignition[980]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2466897850": device or resource busy
Sep 6 00:39:56.104373 ignition[980]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2466897850", trying btrfs: device or resource busy
Sep 6 00:39:56.104373 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2466897850"
Sep 6 00:39:56.060593 systemd[1]: mnt-oem1836325664.mount: Deactivated successfully.
Sep 6 00:39:56.204950 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2466897850"
Sep 6 00:39:56.204950 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2466897850"
Sep 6 00:39:56.204950 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2466897850"
Sep 6 00:39:56.204950 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:39:56.204950 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:39:56.204950 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 6 00:39:56.075750 systemd[1]: mnt-oem2466897850.mount: Deactivated successfully.
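The op(c)/op(d) pairs above show a filesystem-type fallback when reading the OEM partition: the ext4 mount attempt fails with "device or resource busy", and a btrfs mount of the same device then succeeds. A simplified sketch of that try-ext4-then-btrfs flow (requires root and the device; purely an illustration of the pattern, not Ignition's implementation):

```python
# Try mounting a device as ext4, then fall back to btrfs, mirroring the
# op(c)/op(d) sequence in the log.
import subprocess
import tempfile

def mount_oem(device: str = "/dev/disk/by-label/OEM") -> str:
    mountpoint = tempfile.mkdtemp(prefix="oem")
    for fstype in ("ext4", "btrfs"):
        result = subprocess.run(
            ["mount", "-t", fstype, device, mountpoint],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            print(f'[finished] mounting "{device}" at "{mountpoint}" as {fstype}')
            return mountpoint
        print(f'[failed] mounting "{device}" as {fstype}: {result.stderr.strip()}')
    raise RuntimeError(f"could not mount {device}")

# Callers would clean up with: subprocess.run(["umount", mountpoint])
```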
Sep 6 00:39:56.577018 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Sep 6 00:39:56.949006 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 6 00:39:56.949006 ignition[980]: INFO : files: op(14): [started] processing unit "waagent.service"
Sep 6 00:39:56.949006 ignition[980]: INFO : files: op(14): [finished] processing unit "waagent.service"
Sep 6 00:39:56.949006 ignition[980]: INFO : files: op(15): [started] processing unit "nvidia.service"
Sep 6 00:39:56.949006 ignition[980]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Sep 6 00:39:56.949006 ignition[980]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:39:56.981922 ignition[980]: INFO : files: files passed
Sep 6 00:39:56.981922 ignition[980]: INFO : Ignition finished successfully
Sep 6 00:39:57.058506 kernel: audit: type=1130 audit(1757119196.994:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:56.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:56.975172 systemd[1]: Finished ignition-files.service.
Sep 6 00:39:57.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:56.996708 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:39:57.021503 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:39:57.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.087524 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:39:57.036839 systemd[1]: Starting ignition-quench.service...
Sep 6 00:39:57.055021 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:39:57.061453 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:39:57.061558 systemd[1]: Finished ignition-quench.service.
Sep 6 00:39:57.061830 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:39:57.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.063194 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:39:57.084103 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:39:57.084204 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:39:57.087537 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:39:57.090050 systemd[1]: Reached target initrd.target.
Sep 6 00:39:57.092462 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:39:57.093245 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:39:57.110066 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:39:57.139177 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:39:57.150618 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:39:57.156022 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:39:57.161608 systemd[1]: Stopped target timers.target.
Sep 6 00:39:57.166225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:39:57.166395 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:39:57.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.172431 systemd[1]: Stopped target initrd.target.
Sep 6 00:39:57.176674 systemd[1]: Stopped target basic.target.
Sep 6 00:39:57.186242 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:39:57.191713 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:39:57.194806 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:39:57.200338 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:39:57.205382 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:39:57.208155 systemd[1]: Stopped target sysinit.target.
Sep 6 00:39:57.212759 systemd[1]: Stopped target local-fs.target.
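Unit lifecycle shows up twice in this log: as raw audit records (SERVICE_START/SERVICE_STOP) and as kernel-echoed ones (audit: type=1130/1131). A small illustrative extractor that normalizes both shapes; the sample lines are abbreviated copies of entries above, and the parser is a sketch, not a real audit tool:

```python
# Pull unit start/stop events out of both audit record shapes seen in
# this log: numeric type=1130/1131 echoes and named SERVICE_* records.
import re

SAMPLE = [
    'kernel: audit: type=1130 audit(1757119196.994:38): pid=1 uid=0 '
    'msg=\'unit=ignition-files comm="systemd" res=success\'',
    'audit[1]: SERVICE_STOP pid=1 uid=0 msg=\'unit=dracut-pre-pivot '
    'comm="systemd" res=success\'',
]

KIND = {"1130": "START", "1131": "STOP",
        "SERVICE_START": "START", "SERVICE_STOP": "STOP"}
pattern = re.compile(r"(?:type=(\d+)|(SERVICE_\w+)).*?unit=([\w@.\\-]+)")

for line in SAMPLE:
    m = pattern.search(line)
    if m:
        kind = KIND.get(m.group(1) or m.group(2), "?")
        print(f"{kind:5} {m.group(3)}")
```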
Sep 6 00:39:57.217619 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:39:57.222266 systemd[1]: Stopped target swap.target.
Sep 6 00:39:57.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.226620 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:39:57.226786 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:39:57.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.231868 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:39:57.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.236354 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:39:57.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.236516 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:39:57.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.241925 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:39:57.273795 iscsid[831]: iscsid shutting down.
Sep 6 00:39:57.242062 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:39:57.247387 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:39:57.282314 ignition[1018]: INFO : Ignition 2.14.0
Sep 6 00:39:57.282314 ignition[1018]: INFO : Stage: umount
Sep 6 00:39:57.282314 ignition[1018]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:39:57.282314 ignition[1018]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 00:39:57.247517 systemd[1]: Stopped ignition-files.service.
Sep 6 00:39:57.308258 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 00:39:57.308258 ignition[1018]: INFO : umount: umount passed
Sep 6 00:39:57.308258 ignition[1018]: INFO : Ignition finished successfully
Sep 6 00:39:57.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.252477 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 6 00:39:57.252633 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 6 00:39:57.259332 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:39:57.271365 systemd[1]: Stopping iscsid.service...
Sep 6 00:39:57.303440 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:39:57.303743 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 00:39:57.309640 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:39:57.313947 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:39:57.314117 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:39:57.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.317225 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:39:57.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.317373 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:39:57.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.335602 systemd[1]: iscsid.service: Deactivated successfully.
Sep 6 00:39:57.342907 systemd[1]: Stopped iscsid.service.
Sep 6 00:39:57.357653 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:39:57.358560 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:39:57.358715 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:39:57.362370 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:39:57.362497 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:39:57.365365 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:39:57.365419 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:39:57.370336 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 00:39:57.370391 systemd[1]: Stopped ignition-fetch.service.
Sep 6 00:39:57.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.399773 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:39:57.399861 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:39:57.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.410954 systemd[1]: Stopped target paths.target.
Sep 6 00:39:57.415666 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:39:57.421643 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:39:57.426743 systemd[1]: Stopped target slices.target.
Sep 6 00:39:57.431606 systemd[1]: Stopped target sockets.target.
Sep 6 00:39:57.436258 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:39:57.436329 systemd[1]: Closed iscsid.socket.
Sep 6 00:39:57.442702 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:39:57.445345 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:39:57.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.450192 systemd[1]: Stopping iscsiuio.service...
Sep 6 00:39:57.454970 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:39:57.457458 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:39:57.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.461853 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:39:57.464712 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:39:57.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.470180 systemd[1]: Stopped target network.target.
Sep 6 00:39:57.474529 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:39:57.474595 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:39:57.481167 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:39:57.483609 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:39:57.493686 systemd-networkd[822]: eth0: DHCPv6 lease lost
Sep 6 00:39:57.497155 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:39:57.500257 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:39:57.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.505366 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:39:57.507000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 00:39:57.505409 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:39:57.513418 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:39:57.522443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:39:57.522528 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 00:39:57.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.530475 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:39:57.530538 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:39:57.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.537772 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:39:57.537830 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 00:39:57.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.543634 systemd[1]: Stopping systemd-udevd.service...
Sep 6 00:39:57.552615 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 00:39:57.553182 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:39:57.555694 systemd[1]: Stopped systemd-resolved.service.
Sep 6 00:39:57.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.567714 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:39:57.567883 systemd[1]: Stopped systemd-udevd.service.
Sep 6 00:39:57.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.576000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:39:57.578487 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 00:39:57.578654 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 00:39:57.587206 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 00:39:57.587257 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 00:39:57.595177 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 00:39:57.595247 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 00:39:57.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.602477 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 00:39:57.602537 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 00:39:57.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.607556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:39:57.607616 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 00:39:57.613169 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 00:39:57.632340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:39:57.640134 kernel: hv_netvsc 7c1e5276-d9a0-7c1e-5276-d9a07c1e5276 eth0: Data path switched from VF: enP34889s1
Sep 6 00:39:57.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.632445 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 00:39:57.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:57.640695 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 00:39:57.640815 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 00:39:57.662847 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:39:57.665770 systemd[1]: Stopped network-cleanup.service.
Sep 6 00:39:57.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:58.028298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:39:58.028447 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:39:58.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:58.036452 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 00:39:58.041753 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:39:58.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:39:58.041843 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:39:58.048345 systemd[1]: Starting initrd-switch-root.service...
Sep 6 00:39:58.062781 systemd[1]: Switching root.
Sep 6 00:39:58.089005 systemd-journald[183]: Journal stopped
Sep 6 00:40:11.704675 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 6 00:40:11.704707 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 00:40:11.704719 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 00:40:11.704735 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 00:40:11.704744 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 00:40:11.704753 kernel: SELinux: policy capability open_perms=1
Sep 6 00:40:11.704768 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 00:40:11.704777 kernel: SELinux: policy capability always_check_network=0
Sep 6 00:40:11.704788 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 00:40:11.704796 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 00:40:11.704805 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 00:40:11.704816 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 00:40:11.704823 kernel: kauditd_printk_skb: 42 callbacks suppressed
Sep 6 00:40:11.704832 kernel: audit: type=1403 audit(1757119200.445:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 00:40:11.704848 systemd[1]: Successfully loaded SELinux policy in 341.248ms.
Sep 6 00:40:11.704858 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.454ms.
Sep 6 00:40:11.704869 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:40:11.704878 systemd[1]: Detected virtualization microsoft.
Sep 6 00:40:11.704893 systemd[1]: Detected architecture x86-64.
Sep 6 00:40:11.704903 systemd[1]: Detected first boot.
Sep 6 00:40:11.704916 systemd[1]: Hostname set to <ci-3510.3.8-n-cde0707216>.
Sep 6 00:40:11.704927 systemd[1]: Initializing machine ID from random generator.
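The systemd 252 banner above encodes compile-time options as +FEATURE/-FEATURE tokens. A tiny sketch that splits them into enabled and disabled sets (the banner string is truncated here; substitute the full one from the log for complete output):

```python
# Split systemd's "+PAM +AUDIT ... -TPM2 ..." feature banner.
BANNER = ("systemd 252 running in system mode (+PAM +AUDIT +SELINUX "
          "-APPARMOR +IMA +SMACK +SECCOMP -TPM2 +BZIP2 +LZ4 "
          "default-hierarchy=unified)")

inner = BANNER[BANNER.index("(") + 1 : BANNER.rindex(")")]
enabled = [t[1:] for t in inner.split() if t.startswith("+")]
disabled = [t[1:] for t in inner.split() if t.startswith("-")]
print("enabled: ", ", ".join(enabled))
print("disabled:", ", ".join(disabled))
```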
Sep 6 00:40:11.704939 kernel: audit: type=1400 audit(1757119201.060:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:40:11.704948 kernel: audit: type=1400 audit(1757119201.060:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:40:11.704962 kernel: audit: type=1334 audit(1757119201.079:84): prog-id=10 op=LOAD
Sep 6 00:40:11.704974 kernel: audit: type=1334 audit(1757119201.079:85): prog-id=10 op=UNLOAD
Sep 6 00:40:11.704986 kernel: audit: type=1334 audit(1757119201.094:86): prog-id=11 op=LOAD
Sep 6 00:40:11.704994 kernel: audit: type=1334 audit(1757119201.094:87): prog-id=11 op=UNLOAD
Sep 6 00:40:11.705007 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 00:40:11.705017 kernel: audit: type=1400 audit(1757119202.422:88): avc: denied { associate } for pid=1051 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 00:40:11.705031 kernel: audit: type=1300 audit(1757119202.422:88): arch=c000003e syscall=188 success=yes exit=0 a0=c0001078bc a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1034 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:40:11.705043 kernel: audit: type=1327 audit(1757119202.422:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:40:11.705058 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:40:11.705073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:40:11.705085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:40:11.705100 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:40:11.705113 kernel: kauditd_printk_skb: 6 callbacks suppressed
Sep 6 00:40:11.705127 kernel: audit: type=1334 audit(1757119211.066:90): prog-id=12 op=LOAD
Sep 6 00:40:11.705139 kernel: audit: type=1334 audit(1757119211.066:91): prog-id=3 op=UNLOAD
Sep 6 00:40:11.705156 kernel: audit: type=1334 audit(1757119211.077:92): prog-id=13 op=LOAD
Sep 6 00:40:11.705175 kernel: audit: type=1334 audit(1757119211.088:93): prog-id=14 op=LOAD
Sep 6 00:40:11.705188 kernel: audit: type=1334 audit(1757119211.088:94): prog-id=4 op=UNLOAD
Sep 6 00:40:11.705203 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 00:40:11.705220 kernel: audit: type=1334 audit(1757119211.088:95): prog-id=5 op=UNLOAD
Sep 6 00:40:11.705234 kernel: audit: type=1131 audit(1757119211.094:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.705252 systemd[1]: Stopped initrd-switch-root.service.
Sep 6 00:40:11.705268 kernel: audit: type=1130 audit(1757119211.133:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.705286 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:40:11.705299 kernel: audit: type=1131 audit(1757119211.133:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.705311 kernel: audit: type=1334 audit(1757119211.177:99): prog-id=12 op=UNLOAD
Sep 6 00:40:11.705322 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:40:11.705335 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:40:11.705597 systemd[1]: Created slice system-getty.slice.
Sep 6 00:40:11.705614 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:40:11.705627 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:40:11.705640 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:40:11.705650 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:40:11.705663 systemd[1]: Created slice user.slice.
Sep 6 00:40:11.705673 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:40:11.705684 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:40:11.705696 systemd[1]: Set up automount boot.automount.
Sep 6 00:40:11.705709 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:40:11.705719 systemd[1]: Stopped target initrd-switch-root.target.
Sep 6 00:40:11.705732 systemd[1]: Stopped target initrd-fs.target.
Sep 6 00:40:11.705744 systemd[1]: Stopped target initrd-root-fs.target.
Sep 6 00:40:11.705755 systemd[1]: Reached target integritysetup.target.
Sep 6 00:40:11.705768 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:40:11.705778 systemd[1]: Reached target remote-fs.target.
Sep 6 00:40:11.705791 systemd[1]: Reached target slices.target.
Sep 6 00:40:11.705801 systemd[1]: Reached target swap.target.
Sep 6 00:40:11.705813 systemd[1]: Reached target torcx.target.
Sep 6 00:40:11.705826 systemd[1]: Reached target veritysetup.target.
Sep 6 00:40:11.705839 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:40:11.705848 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:40:11.705861 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:40:11.705872 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:40:11.705888 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:40:11.705898 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:40:11.705911 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:40:11.705921 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:40:11.705931 systemd[1]: Mounting media.mount...
Sep 6 00:40:11.705943 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:11.705954 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:40:11.705966 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:40:11.705978 systemd[1]: Mounting tmp.mount...
Sep 6 00:40:11.705991 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:40:11.706002 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:40:11.706016 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:40:11.706025 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:40:11.706038 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:40:11.706048 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:40:11.706061 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:40:11.706071 systemd[1]: Starting modprobe@fuse.service...
Sep 6 00:40:11.706086 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:40:11.706097 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:40:11.706110 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 00:40:11.706120 systemd[1]: Stopped systemd-fsck-root.service.
Sep 6 00:40:11.706130 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 00:40:11.706143 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 00:40:11.706152 systemd[1]: Stopped systemd-journald.service.
Sep 6 00:40:11.706162 systemd[1]: Starting systemd-journald.service...
Sep 6 00:40:11.706172 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:40:11.706184 kernel: loop: module loaded
Sep 6 00:40:11.706193 systemd[1]: Starting systemd-network-generator.service...
Sep 6 00:40:11.706203 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 00:40:11.706212 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:40:11.706222 kernel: fuse: init (API version 7.34)
Sep 6 00:40:11.706231 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 00:40:11.706242 systemd[1]: Stopped verity-setup.service.
Sep 6 00:40:11.706251 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:11.706261 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 00:40:11.706274 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 00:40:11.706290 systemd-journald[1160]: Journal started
Sep 6 00:40:11.706343 systemd-journald[1160]: Runtime Journal (/run/log/journal/65691de7753f4ffe979ed91488cbb06e) is 8.0M, max 159.0M, 151.0M free.
Sep 6 00:40:00.445000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 00:40:01.060000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:40:01.060000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:40:01.079000 audit: BPF prog-id=10 op=LOAD
Sep 6 00:40:01.079000 audit: BPF prog-id=10 op=UNLOAD
Sep 6 00:40:01.094000 audit: BPF prog-id=11 op=LOAD
Sep 6 00:40:01.094000 audit: BPF prog-id=11 op=UNLOAD
Sep 6 00:40:02.422000 audit[1051]: AVC avc: denied { associate } for pid=1051 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 00:40:02.422000 audit[1051]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078bc a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1034 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:40:02.422000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:40:02.431000 audit[1051]: AVC avc: denied { associate } for pid=1051 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 6 00:40:02.431000 audit[1051]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000107995 a2=1ed a3=0 items=2 ppid=1034 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:40:02.431000 audit: CWD cwd="/"
Sep 6 00:40:02.431000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:02.431000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:02.431000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:40:11.066000 audit: BPF prog-id=12 op=LOAD
Sep 6 00:40:11.066000 audit: BPF prog-id=3 op=UNLOAD
Sep 6 00:40:11.077000 audit: BPF prog-id=13 op=LOAD
Sep 6 00:40:11.088000 audit: BPF prog-id=14 op=LOAD
Sep 6 00:40:11.088000 audit: BPF prog-id=4 op=UNLOAD
Sep 6 00:40:11.088000 audit: BPF prog-id=5 op=UNLOAD
Sep 6 00:40:11.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.177000 audit: BPF prog-id=12 op=UNLOAD
Sep 6 00:40:11.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.607000 audit: BPF prog-id=15 op=LOAD
Sep 6 00:40:11.607000 audit: BPF prog-id=16 op=LOAD
Sep 6 00:40:11.607000 audit: BPF prog-id=17 op=LOAD
Sep 6 00:40:11.607000 audit: BPF prog-id=13 op=UNLOAD
Sep 6 00:40:11.607000 audit: BPF prog-id=14 op=UNLOAD
Sep 6 00:40:11.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.701000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:40:11.701000 audit[1160]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd74eb5e70 a2=4000 a3=7ffd74eb5f0c items=0 ppid=1 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:40:11.701000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:40:11.065153 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 00:40:02.382320 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:40:11.065168 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Sep 6 00:40:02.394981 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 00:40:11.094426 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 6 00:40:02.395009 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 00:40:02.395059 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 6 00:40:02.395070 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 6 00:40:02.395133 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 6 00:40:02.395149 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 6 00:40:02.395440 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 6 00:40:02.395490 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 00:40:02.395503 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 00:40:02.410284 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 6 00:40:02.410365 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 6 00:40:02.410403 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 6 00:40:02.410427 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 6 00:40:02.410460 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 6 00:40:02.410482 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 6 00:40:09.871358 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:09Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:40:09.871705 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:09Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:40:09.872276 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:09Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:40:09.872500 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:09Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:40:09.872563 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:09Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 6 00:40:09.872648 /usr/lib/systemd/system-generators/torcx-generator[1051]: time="2025-09-06T00:40:09Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 6 00:40:11.721901 systemd[1]: Started systemd-journald.service.
Sep 6 00:40:11.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.725006 systemd[1]: Mounted media.mount.
Sep 6 00:40:11.727566 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 00:40:11.730278 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 00:40:11.733176 systemd[1]: Mounted tmp.mount.
Sep 6 00:40:11.735818 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 00:40:11.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.738760 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:40:11.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.741748 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:40:11.741917 systemd[1]: Finished modprobe@configfs.service.
Sep 6 00:40:11.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.744838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:40:11.744999 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:40:11.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.747806 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:40:11.747966 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:40:11.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.750541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:40:11.750721 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:40:11.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.753610 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:40:11.753793 systemd[1]: Finished modprobe@fuse.service.
Sep 6 00:40:11.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.756506 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:40:11.756682 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:40:11.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.759432 systemd[1]: Finished systemd-network-generator.service.
Sep 6 00:40:11.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.762718 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 00:40:11.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.766132 systemd[1]: Reached target network-pre.target.
Sep 6 00:40:11.770195 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 00:40:11.774302 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 00:40:11.780510 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:40:11.794710 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 00:40:11.798799 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 00:40:11.801783 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:40:11.803132 systemd[1]: Starting systemd-random-seed.service...
Sep 6 00:40:11.806189 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:40:11.807505 systemd[1]: Starting systemd-sysusers.service...
Sep 6 00:40:11.813150 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:40:11.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.816621 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 00:40:11.820000 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 00:40:11.826409 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:40:11.839072 systemd[1]: Finished systemd-random-seed.service.
Sep 6 00:40:11.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.842145 systemd[1]: Reached target first-boot-complete.target.
Sep 6 00:40:11.856166 systemd-journald[1160]: Time spent on flushing to /var/log/journal/65691de7753f4ffe979ed91488cbb06e is 29.449ms for 1157 entries.
Sep 6 00:40:11.856166 systemd-journald[1160]: System Journal (/var/log/journal/65691de7753f4ffe979ed91488cbb06e) is 8.0M, max 2.6G, 2.6G free.
Sep 6 00:40:11.970570 systemd-journald[1160]: Received client request to flush runtime journal.
Sep 6 00:40:11.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.906990 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:40:11.971906 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 6 00:40:11.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:11.911605 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 00:40:11.935844 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:40:11.971868 systemd[1]: Finished systemd-journal-flush.service.
Sep 6 00:40:12.263542 systemd[1]: Finished systemd-sysusers.service.
Sep 6 00:40:12.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:12.941981 systemd[1]: Finished systemd-hwdb-update.service.
Sep 6 00:40:12.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:12.945000 audit: BPF prog-id=18 op=LOAD
Sep 6 00:40:12.945000 audit: BPF prog-id=19 op=LOAD
Sep 6 00:40:12.945000 audit: BPF prog-id=7 op=UNLOAD
Sep 6 00:40:12.945000 audit: BPF prog-id=8 op=UNLOAD
Sep 6 00:40:12.947038 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:40:12.966696 systemd-udevd[1177]: Using default interface naming scheme 'v252'.
Sep 6 00:40:13.149665 systemd[1]: Started systemd-udevd.service.
Sep 6 00:40:13.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:13.153000 audit: BPF prog-id=20 op=LOAD
Sep 6 00:40:13.155404 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:40:13.193573 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Sep 6 00:40:13.229000 audit: BPF prog-id=21 op=LOAD
Sep 6 00:40:13.229000 audit: BPF prog-id=22 op=LOAD
Sep 6 00:40:13.229000 audit: BPF prog-id=23 op=LOAD
Sep 6 00:40:13.231399 systemd[1]: Starting systemd-userdbd.service...
Sep 6 00:40:13.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:13.284989 systemd[1]: Started systemd-userdbd.service.
Sep 6 00:40:13.297733 kernel: mousedev: PS/2 mouse device common for all mice
Sep 6 00:40:13.289000 audit[1192]: AVC avc: denied { confidentiality } for pid=1192 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 6 00:40:13.340642 kernel: hv_vmbus: registering driver hyperv_fb
Sep 6 00:40:13.355119 kernel: hv_utils: Registering HyperV Utility Driver
Sep 6 00:40:13.355237 kernel: hv_vmbus: registering driver hv_utils
Sep 6 00:40:13.359631 kernel: hv_vmbus: registering driver hv_balloon
Sep 6 00:40:13.393065 kernel: hv_utils: Heartbeat IC version 3.0
Sep 6 00:40:13.393168 kernel: hv_utils: Shutdown IC version 3.2
Sep 6 00:40:13.393187 kernel: hv_utils: TimeSync IC version 4.0
Sep 6 00:40:13.393203 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 6 00:40:13.305998 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 6 00:40:13.400516 systemd-journald[1160]: Time jumped backwards, rotating.
Sep 6 00:40:13.400641 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 6 00:40:13.400666 kernel: Console: switching to colour dummy device 80x25
Sep 6 00:40:13.400692 kernel: Console: switching to colour frame buffer device 128x48
Sep 6 00:40:13.400712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#82 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 6 00:40:13.289000 audit[1192]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562d3828acd0 a1=f83c a2=7f6196f61bc5 a3=5 items=12 ppid=1177 pid=1192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:40:13.289000 audit: CWD cwd="/"
Sep 6 00:40:13.289000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=1 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=2 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=3 name=(null) inode=15471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=4 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=5 name=(null) inode=15472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=6 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=7 name=(null) inode=15473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=8 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=9 name=(null) inode=15474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=10 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PATH item=11 name=(null) inode=15475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:40:13.289000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 6 00:40:13.559329 systemd-networkd[1183]: lo: Link UP
Sep 6 00:40:13.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:13.559347 systemd-networkd[1183]: lo: Gained carrier
Sep 6 00:40:13.560001 systemd-networkd[1183]: Enumeration completed
Sep 6 00:40:13.560113 systemd[1]: Started systemd-networkd.service.
Sep 6 00:40:13.564845 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:40:13.582640 systemd-networkd[1183]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:40:13.583795 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:40:13.639939 kernel: mlx5_core 8849:00:02.0 enP34889s1: Link up
Sep 6 00:40:13.670297 kernel: hv_netvsc 7c1e5276-d9a0-7c1e-5276-d9a07c1e5276 eth0: Data path switched to VF: enP34889s1
Sep 6 00:40:13.674759 systemd-networkd[1183]: enP34889s1: Link UP
Sep 6 00:40:13.675113 systemd-networkd[1183]: eth0: Link UP
Sep 6 00:40:13.675199 systemd-networkd[1183]: eth0: Gained carrier
Sep 6 00:40:13.678159 systemd-networkd[1183]: enP34889s1: Gained carrier
Sep 6 00:40:13.692128 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Sep 6 00:40:13.711072 systemd-networkd[1183]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 6 00:40:13.732326 systemd[1]: Finished systemd-udev-settle.service.
Sep 6 00:40:13.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:13.736961 systemd[1]: Starting lvm2-activation-early.service...
Sep 6 00:40:14.033307 lvm[1257]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:40:14.098471 systemd[1]: Finished lvm2-activation-early.service.
Sep 6 00:40:14.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:14.101724 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:40:14.105888 systemd[1]: Starting lvm2-activation.service...
Sep 6 00:40:14.113544 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:40:14.140567 systemd[1]: Finished lvm2-activation.service.
Sep 6 00:40:14.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:14.144008 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:40:14.146693 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 00:40:14.146737 systemd[1]: Reached target local-fs.target.
Sep 6 00:40:14.149614 systemd[1]: Reached target machines.target.
Sep 6 00:40:14.153694 systemd[1]: Starting ldconfig.service...
Sep 6 00:40:14.156415 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:40:14.156529 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:40:14.158077 systemd[1]: Starting systemd-boot-update.service...
Sep 6 00:40:14.162193 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 6 00:40:14.169389 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 00:40:14.173767 systemd[1]: Starting systemd-sysext.service...
Sep 6 00:40:14.716854 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1260 (bootctl)
Sep 6 00:40:14.718949 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 00:40:14.782985 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 6 00:40:14.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:14.850652 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 00:40:14.866928 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:40:14.867168 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:40:14.895903 kernel: loop0: detected capacity change from 0 to 221472
Sep 6 00:40:14.932142 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:40:14.947844 kernel: loop1: detected capacity change from 0 to 221472
Sep 6 00:40:14.976331 (sd-sysext)[1272]: Using extensions 'kubernetes'.
Sep 6 00:40:14.977832 (sd-sysext)[1272]: Merged extensions into '/usr'.
Sep 6 00:40:14.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:14.997596 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:40:14.998479 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:40:15.000805 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.003147 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:40:15.004770 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.007684 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:40:15.011357 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:40:15.016675 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:40:15.018079 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.018202 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:40:15.018310 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.019362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:40:15.020129 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:40:15.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.022334 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:40:15.022580 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:40:15.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.024366 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.025714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:40:15.026300 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:40:15.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.029742 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:40:15.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.031928 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:40:15.035646 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:40:15.036964 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:40:15.038593 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:40:15.049010 systemd[1]: Reloading.
Sep 6 00:40:15.129724 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2025-09-06T00:40:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:40:15.129770 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2025-09-06T00:40:15Z" level=info msg="torcx already run"
Sep 6 00:40:15.141963 systemd-networkd[1183]: eth0: Gained IPv6LL
Sep 6 00:40:15.176877 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:40:15.226624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:40:15.226650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:40:15.243972 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:40:15.270337 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:40:15.314000 audit: BPF prog-id=24 op=LOAD
Sep 6 00:40:15.314000 audit: BPF prog-id=20 op=UNLOAD
Sep 6 00:40:15.315000 audit: BPF prog-id=25 op=LOAD
Sep 6 00:40:15.315000 audit: BPF prog-id=21 op=UNLOAD
Sep 6 00:40:15.315000 audit: BPF prog-id=26 op=LOAD
Sep 6 00:40:15.315000 audit: BPF prog-id=27 op=LOAD
Sep 6 00:40:15.315000 audit: BPF prog-id=22 op=UNLOAD
Sep 6 00:40:15.315000 audit: BPF prog-id=23 op=UNLOAD
Sep 6 00:40:15.316000 audit: BPF prog-id=28 op=LOAD
Sep 6 00:40:15.316000 audit: BPF prog-id=29 op=LOAD
Sep 6 00:40:15.316000 audit: BPF prog-id=18 op=UNLOAD
Sep 6 00:40:15.316000 audit: BPF prog-id=19 op=UNLOAD
Sep 6 00:40:15.317000 audit: BPF prog-id=30 op=LOAD
Sep 6 00:40:15.317000 audit: BPF prog-id=15 op=UNLOAD
Sep 6 00:40:15.317000 audit: BPF prog-id=31 op=LOAD
Sep 6 00:40:15.317000 audit: BPF prog-id=32 op=LOAD
Sep 6 00:40:15.317000 audit: BPF prog-id=16 op=UNLOAD
Sep 6 00:40:15.317000 audit: BPF prog-id=17 op=UNLOAD
Sep 6 00:40:15.322162 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:40:15.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.336015 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.336268 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.337803 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:40:15.341858 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:40:15.345668 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:40:15.348172 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.348337 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:40:15.348475 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.349316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:40:15.349479 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:40:15.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.352798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:40:15.352972 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:40:15.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.356210 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:40:15.356356 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:40:15.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.360805 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.361122 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.362583 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:40:15.367309 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:40:15.371359 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:40:15.374360 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.374560 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:40:15.374734 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.375996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:40:15.376158 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:40:15.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.381724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:40:15.381959 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:40:15.385052 systemd-fsck[1267]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:40:15.385052 systemd-fsck[1267]: /dev/sda1: 790 files, 120761/258078 clusters
Sep 6 00:40:15.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.386558 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:40:15.386746 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:40:15.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.393896 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:40:15.396681 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:40:15.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.400675 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:40:15.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.409517 systemd[1]: Mounting boot.mount...
Sep 6 00:40:15.411883 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.412266 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.413873 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:40:15.419764 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:40:15.424176 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:40:15.428093 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:40:15.430985 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.431060 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:40:15.431171 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:40:15.433278 systemd[1]: Mounted boot.mount.
Sep 6 00:40:15.436161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:40:15.436333 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:40:15.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.439595 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:40:15.439751 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:40:15.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.442576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:40:15.442737 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:40:15.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.446146 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:40:15.446301 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:40:15.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:15.449438 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:40:15.449491 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:40:15.450570 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:40:15.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.136523 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:40:16.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.141660 systemd[1]: Starting audit-rules.service...
Sep 6 00:40:16.142884 kernel: kauditd_printk_skb: 118 callbacks suppressed
Sep 6 00:40:16.142950 kernel: audit: type=1130 audit(1757119216.138:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.161774 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:40:16.166066 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:40:16.177176 kernel: audit: type=1334 audit(1757119216.169:202): prog-id=33 op=LOAD
Sep 6 00:40:16.169000 audit: BPF prog-id=33 op=LOAD
Sep 6 00:40:16.175442 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:40:16.181000 audit: BPF prog-id=34 op=LOAD
Sep 6 00:40:16.184152 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:40:16.187884 kernel: audit: type=1334 audit(1757119216.181:203): prog-id=34 op=LOAD
Sep 6 00:40:16.191626 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:40:16.228615 kernel: audit: type=1127 audit(1757119216.211:204): pid=1384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.211000 audit[1384]: SYSTEM_BOOT pid=1384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.235299 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:40:16.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.238840 kernel: audit: type=1130 audit(1757119216.237:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.252324 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:40:16.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.271879 kernel: audit: type=1130 audit(1757119216.254:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.255470 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:40:16.311037 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:40:16.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.331528 kernel: audit: type=1130 audit(1757119216.313:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.331861 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:40:16.334909 systemd[1]: Reached target time-set.target.
Sep 6 00:40:16.354016 kernel: audit: type=1130 audit(1757119216.333:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.363310 systemd-resolved[1379]: Positive Trust Anchors:
Sep 6 00:40:16.363323 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:40:16.363351 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:40:16.455798 systemd-resolved[1379]: Using system hostname 'ci-3510.3.8-n-cde0707216'.
Sep 6 00:40:16.457725 systemd[1]: Started systemd-resolved.service.
Sep 6 00:40:16.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.460785 systemd[1]: Reached target network.target.
Sep 6 00:40:16.476218 kernel: audit: type=1130 audit(1757119216.459:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:40:16.479376 systemd[1]: Reached target network-online.target.
Sep 6 00:40:16.483965 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:40:16.516178 systemd-timesyncd[1381]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Sep 6 00:40:16.516329 systemd-timesyncd[1381]: Initial clock synchronization to Sat 2025-09-06 00:40:16.516064 UTC.
Sep 6 00:40:16.575000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:40:16.577362 systemd[1]: Finished audit-rules.service.
Sep 6 00:40:16.582148 augenrules[1397]: No rules
Sep 6 00:40:16.575000 audit[1397]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1ee28770 a2=420 a3=0 items=0 ppid=1376 pid=1397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:40:16.586997 kernel: audit: type=1305 audit(1757119216.575:210): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:40:16.575000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:40:20.929105 ldconfig[1259]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:40:20.938707 systemd[1]: Finished ldconfig.service.
Sep 6 00:40:20.943266 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:40:20.964361 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:40:20.967589 systemd[1]: Reached target sysinit.target.
Sep 6 00:40:20.970386 systemd[1]: Started motdgen.path.
Sep 6 00:40:20.972937 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:40:20.976362 systemd[1]: Started logrotate.timer.
Sep 6 00:40:20.978861 systemd[1]: Started mdadm.timer.
Sep 6 00:40:20.980954 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:40:20.983583 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:40:20.983624 systemd[1]: Reached target paths.target.
Sep 6 00:40:20.985859 systemd[1]: Reached target timers.target.
Sep 6 00:40:20.988716 systemd[1]: Listening on dbus.socket.
Sep 6 00:40:20.992027 systemd[1]: Starting docker.socket...
Sep 6 00:40:21.009038 systemd[1]: Listening on sshd.socket.
Sep 6 00:40:21.011874 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:40:21.012384 systemd[1]: Listening on docker.socket.
Sep 6 00:40:21.014904 systemd[1]: Reached target sockets.target.
Sep 6 00:40:21.023656 systemd[1]: Reached target basic.target.
Sep 6 00:40:21.026031 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:40:21.026071 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:40:21.027421 systemd[1]: Starting containerd.service...
Sep 6 00:40:21.031738 systemd[1]: Starting dbus.service...
Sep 6 00:40:21.035665 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:40:21.040164 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:40:21.042673 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:40:21.076693 systemd[1]: Starting kubelet.service...
Sep 6 00:40:21.081266 systemd[1]: Starting motdgen.service...
Sep 6 00:40:21.085399 systemd[1]: Started nvidia.service.
Sep 6 00:40:21.089915 systemd[1]: Starting prepare-helm.service...
Sep 6 00:40:21.094333 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:40:21.099899 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:40:21.108078 systemd[1]: Starting systemd-logind.service...
Sep 6 00:40:21.111560 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:40:21.111651 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:40:21.112287 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:40:21.114312 systemd[1]: Starting update-engine.service...
Sep 6 00:40:21.118625 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:40:21.132257 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:40:21.132561 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:40:21.154803 jq[1407]: false
Sep 6 00:40:21.157069 jq[1424]: true
Sep 6 00:40:21.157525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:40:21.157867 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found loop1
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found sda
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found sda1
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found sda2
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found sda3
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found usr
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found sda4
Sep 6 00:40:21.179735 extend-filesystems[1408]: Found sda6
Sep 6 00:40:21.230287 extend-filesystems[1408]: Found sda7
Sep 6 00:40:21.230287 extend-filesystems[1408]: Found sda9
Sep 6 00:40:21.230287 extend-filesystems[1408]: Checking size of /dev/sda9
Sep 6 00:40:21.256999 tar[1431]: linux-amd64/helm
Sep 6 00:40:21.260092 jq[1438]: true
Sep 6 00:40:21.188971 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:40:21.189231 systemd[1]: Finished motdgen.service.
Sep 6 00:40:21.317747 env[1434]: time="2025-09-06T00:40:21.317671638Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:40:21.342878 extend-filesystems[1408]: Old size kept for /dev/sda9
Sep 6 00:40:21.346399 extend-filesystems[1408]: Found sr0
Sep 6 00:40:21.349795 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:40:21.350053 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:40:21.370326 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 6 00:40:21.371930 systemd-logind[1421]: New seat seat0.
Sep 6 00:40:21.377828 dbus-daemon[1406]: [system] SELinux support is enabled
Sep 6 00:40:21.378074 systemd[1]: Started dbus.service.
Sep 6 00:40:21.387914 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:40:21.387951 systemd[1]: Reached target system-config.target.
Sep 6 00:40:21.391099 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:40:21.391123 systemd[1]: Reached target user-config.target.
Sep 6 00:40:21.396964 systemd[1]: Started systemd-logind.service.
Sep 6 00:40:21.400980 dbus-daemon[1406]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 6 00:40:21.480841 env[1434]: time="2025-09-06T00:40:21.463640081Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:40:21.445639 systemd[1]: nvidia.service: Deactivated successfully.
Sep 6 00:40:21.483366 env[1434]: time="2025-09-06T00:40:21.483033973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:40:21.489164 bash[1460]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:40:21.485316 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:40:21.489749 env[1434]: time="2025-09-06T00:40:21.489587970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:40:21.489749 env[1434]: time="2025-09-06T00:40:21.489633870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:40:21.490377 env[1434]: time="2025-09-06T00:40:21.490344370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:40:21.490488 env[1434]: time="2025-09-06T00:40:21.490469970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:40:21.490566 env[1434]: time="2025-09-06T00:40:21.490550470Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:40:21.490646 env[1434]: time="2025-09-06T00:40:21.490632370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:40:21.490805 env[1434]: time="2025-09-06T00:40:21.490790870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:40:21.493424 env[1434]: time="2025-09-06T00:40:21.493398269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:40:21.493742 env[1434]: time="2025-09-06T00:40:21.493717069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:40:21.493841 env[1434]: time="2025-09-06T00:40:21.493807169Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:40:21.493970 env[1434]: time="2025-09-06T00:40:21.493951869Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:40:21.494051 env[1434]: time="2025-09-06T00:40:21.494037669Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.507905863Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.507975463Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.507998063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508082163Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508156663Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508181163Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508212263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508233463Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508254363Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508285163Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508304863Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508324463Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508498863Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:40:21.509994 env[1434]: time="2025-09-06T00:40:21.508624463Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509167963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509213863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509246163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509345063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509366163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509476063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509498163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509519663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509554163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509576263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509598463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509638863Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509854862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509880962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.510702 env[1434]: time="2025-09-06T00:40:21.509912762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.511259 env[1434]: time="2025-09-06T00:40:21.509932462Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:40:21.511259 env[1434]: time="2025-09-06T00:40:21.509961162Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 00:40:21.512798 env[1434]: time="2025-09-06T00:40:21.511374462Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:40:21.512798 env[1434]: time="2025-09-06T00:40:21.511440462Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 00:40:21.512798 env[1434]: time="2025-09-06T00:40:21.511492062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:40:21.513012 env[1434]: time="2025-09-06T00:40:21.511867062Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:40:21.513012 env[1434]: time="2025-09-06T00:40:21.511969462Z" level=info msg="Connect containerd service"
Sep 6 00:40:21.513012 env[1434]: time="2025-09-06T00:40:21.512038562Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6
00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.513508361Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.513640661Z" level=info msg="Start subscribing containerd event" Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.513707861Z" level=info msg="Start recovering state" Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.513787761Z" level=info msg="Start event monitor" Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.513807961Z" level=info msg="Start snapshots syncer" Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.513847661Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.513859261Z" level=info msg="Start streaming server" Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.514411661Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.514541261Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:40:21.539514 env[1434]: time="2025-09-06T00:40:21.521008358Z" level=info msg="containerd successfully booted in 0.204471s" Sep 6 00:40:21.521132 systemd[1]: Started containerd.service. Sep 6 00:40:21.958592 update_engine[1422]: I0906 00:40:21.956769 1422 main.cc:92] Flatcar Update Engine starting Sep 6 00:40:22.034044 systemd[1]: Started update-engine.service. Sep 6 00:40:22.036094 update_engine[1422]: I0906 00:40:22.035974 1422 update_check_scheduler.cc:74] Next update check in 2m36s Sep 6 00:40:22.040166 systemd[1]: Started locksmithd.service. Sep 6 00:40:22.071759 tar[1431]: linux-amd64/LICENSE Sep 6 00:40:22.071961 tar[1431]: linux-amd64/README.md Sep 6 00:40:22.080124 systemd[1]: Finished prepare-helm.service. Sep 6 00:40:23.093125 systemd[1]: Started kubelet.service. Sep 6 00:40:23.151239 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:40:23.175420 systemd[1]: Finished sshd-keygen.service. Sep 6 00:40:23.180774 systemd[1]: Starting issuegen.service... Sep 6 00:40:23.185034 systemd[1]: Started waagent.service. Sep 6 00:40:23.193707 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:40:23.193983 systemd[1]: Finished issuegen.service. Sep 6 00:40:23.198772 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:40:23.221333 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:40:23.226410 systemd[1]: Started getty@tty1.service. Sep 6 00:40:23.230688 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:40:23.233652 systemd[1]: Reached target getty.target. Sep 6 00:40:23.236575 systemd[1]: Reached target multi-user.target. Sep 6 00:40:23.249487 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:40:23.263864 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:40:23.264109 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:40:23.267567 systemd[1]: Startup finished in 1.109s (kernel) + 13.067s (initrd) + 23.546s (userspace) = 37.722s. 
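The CRI plugin's CNI error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is a readiness probe, not a fatal failure: containerd starts anyway and its "cni network conf syncer" retries until a network plugin drops a config file into that directory. A minimal Python sketch of the same check, for illustration only (containerd's real implementation is Go, and the accepted extension list here is an assumption):

```python
# Sketch: reproduce containerd's CNI readiness check. Assumption: a network
# is considered configured once any config file exists in the directory named
# by NetworkPluginConfDir (/etc/cni/net.d in the config dump above).
import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"  # from the CRI plugin config dump above

def cni_configured(conf_dir: str = CNI_CONF_DIR) -> bool:
    patterns = ("*.conf", "*.conflist", "*.json")  # assumed extension set
    files = [f for p in patterns for f in glob.glob(os.path.join(conf_dir, p))]
    return len(files) > 0

if __name__ == "__main__":
    if cni_configured():
        print("cni config present")
    else:
        # mirrors the log: pods cannot get networking until a CNI plugin
        # (flannel, calico, ...) writes a config file here
        print("no network config found in /etc/cni/net.d")
```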
Sep 6 00:40:23.417624 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:40:23.621158 login[1532]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 00:40:23.624792 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 00:40:23.649738 systemd[1]: Created slice user-500.slice. Sep 6 00:40:23.651509 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:40:23.654779 systemd-logind[1421]: New session 2 of user core. Sep 6 00:40:23.660295 systemd-logind[1421]: New session 1 of user core. Sep 6 00:40:23.666483 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:40:23.668740 systemd[1]: Starting user@500.service... Sep 6 00:40:23.674389 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:40:23.827981 systemd[1541]: Queued start job for default target default.target. Sep 6 00:40:23.828847 systemd[1541]: Reached target paths.target. Sep 6 00:40:23.828883 systemd[1541]: Reached target sockets.target. Sep 6 00:40:23.828903 systemd[1541]: Reached target timers.target. Sep 6 00:40:23.828921 systemd[1541]: Reached target basic.target. Sep 6 00:40:23.829076 systemd[1]: Started user@500.service. Sep 6 00:40:23.830517 systemd[1]: Started session-1.scope. Sep 6 00:40:23.831086 systemd[1541]: Reached target default.target. Sep 6 00:40:23.831156 systemd[1541]: Startup finished in 144ms. Sep 6 00:40:23.831441 systemd[1]: Started session-2.scope. Sep 6 00:40:23.908805 kubelet[1518]: E0906 00:40:23.908742 1518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:40:23.910947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:40:23.911082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:40:23.911419 systemd[1]: kubelet.service: Consumed 1.125s CPU time. Sep 6 00:40:28.585858 waagent[1527]: 2025-09-06T00:40:28.585719Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 6 00:40:28.590710 waagent[1527]: 2025-09-06T00:40:28.590625Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 6 00:40:28.599704 waagent[1527]: 2025-09-06T00:40:28.599620Z INFO Daemon Daemon Python: 3.9.16 Sep 6 00:40:28.602882 waagent[1527]: 2025-09-06T00:40:28.602781Z INFO Daemon Daemon Run daemon Sep 6 00:40:28.606105 waagent[1527]: 2025-09-06T00:40:28.605929Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 6 00:40:28.617523 waagent[1527]: 2025-09-06T00:40:28.617404Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
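The kubelet failure above is the normal state of a node that has not yet been bootstrapped: on kubeadm-provisioned nodes /var/lib/kubelet/config.yaml is written during `kubeadm init`/`kubeadm join`, so until then the unit exits 1 and systemd keeps rescheduling it (the restart counter entries later in this log). A trivial Python rendering of that pre-flight condition, purely as illustration (kubelet itself is Go; the path is taken verbatim from the error message):

```python
# Sketch: the pre-flight condition kubelet keeps failing on in this log.
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def main() -> int:
    if not os.path.isfile(KUBELET_CONFIG):
        # kubeadm init/join normally writes this file; until it exists the
        # service exits 1 and systemd retries on its restart schedule
        print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}",
              file=sys.stderr)
        return 1
    print("kubelet config present")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```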
Sep 6 00:40:28.626235 waagent[1527]: 2025-09-06T00:40:28.626132Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 00:40:28.632143 waagent[1527]: 2025-09-06T00:40:28.632079Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 00:40:28.635531 waagent[1527]: 2025-09-06T00:40:28.635465Z INFO Daemon Daemon Using waagent for provisioning Sep 6 00:40:28.639378 waagent[1527]: 2025-09-06T00:40:28.639319Z INFO Daemon Daemon Activate resource disk Sep 6 00:40:28.642719 waagent[1527]: 2025-09-06T00:40:28.642658Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 6 00:40:28.654353 waagent[1527]: 2025-09-06T00:40:28.654267Z INFO Daemon Daemon Found device: None Sep 6 00:40:28.657688 waagent[1527]: 2025-09-06T00:40:28.657616Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 6 00:40:28.663275 waagent[1527]: 2025-09-06T00:40:28.663208Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 6 00:40:28.670833 waagent[1527]: 2025-09-06T00:40:28.670745Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 00:40:28.675089 waagent[1527]: 2025-09-06T00:40:28.675023Z INFO Daemon Daemon Running default provisioning handler Sep 6 00:40:28.688039 waagent[1527]: 2025-09-06T00:40:28.687906Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 6 00:40:28.697742 waagent[1527]: 2025-09-06T00:40:28.697630Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 00:40:28.706543 waagent[1527]: 2025-09-06T00:40:28.706446Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 00:40:28.710011 waagent[1527]: 2025-09-06T00:40:28.709939Z INFO Daemon Daemon Copying ovf-env.xml Sep 6 00:40:28.768856 waagent[1527]: 2025-09-06T00:40:28.764583Z INFO Daemon Daemon Successfully mounted dvd Sep 6 00:40:28.798572 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 6 00:40:28.814732 waagent[1527]: 2025-09-06T00:40:28.814579Z INFO Daemon Daemon Detect protocol endpoint Sep 6 00:40:28.819289 waagent[1527]: 2025-09-06T00:40:28.819194Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 00:40:28.823195 waagent[1527]: 2025-09-06T00:40:28.823116Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 6 00:40:28.827460 waagent[1527]: 2025-09-06T00:40:28.827399Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 6 00:40:28.831173 waagent[1527]: 2025-09-06T00:40:28.831110Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 6 00:40:28.834524 waagent[1527]: 2025-09-06T00:40:28.834464Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 6 00:40:28.929185 waagent[1527]: 2025-09-06T00:40:28.929003Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 6 00:40:28.939155 waagent[1527]: 2025-09-06T00:40:28.930031Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 6 00:40:28.939155 waagent[1527]: 2025-09-06T00:40:28.931227Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 6 00:40:29.491498 waagent[1527]: 2025-09-06T00:40:29.491305Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 6 00:40:29.504429 waagent[1527]: 2025-09-06T00:40:29.504334Z INFO Daemon Daemon Forcing an update of the goal state.. Sep 6 00:40:29.508454 waagent[1527]: 2025-09-06T00:40:29.508374Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 6 00:40:29.588945 waagent[1527]: 2025-09-06T00:40:29.588773Z INFO Daemon Daemon Found private key matching thumbprint F20C9784B1E9BEFB38F91E691B194BBD16386307 Sep 6 00:40:29.596752 waagent[1527]: 2025-09-06T00:40:29.596632Z INFO Daemon Daemon Fetch goal state completed Sep 6 00:40:29.641610 waagent[1527]: 2025-09-06T00:40:29.641525Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 6017a80e-b730-4fdc-ac19-2565fde33c32 New eTag: 9527822865016669468] Sep 6 00:40:29.648721 waagent[1527]: 2025-09-06T00:40:29.648618Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 00:40:29.662031 waagent[1527]: 2025-09-06T00:40:29.661944Z INFO Daemon Daemon Starting provisioning Sep 6 00:40:29.665506 waagent[1527]: 2025-09-06T00:40:29.665424Z INFO Daemon Daemon Handle ovf-env.xml. Sep 6 00:40:29.668509 waagent[1527]: 2025-09-06T00:40:29.668438Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-cde0707216] Sep 6 00:40:29.688358 waagent[1527]: 2025-09-06T00:40:29.688226Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-cde0707216] Sep 6 00:40:29.694929 waagent[1527]: 2025-09-06T00:40:29.689082Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 6 00:40:29.694929 waagent[1527]: 2025-09-06T00:40:29.690213Z INFO Daemon Daemon Primary interface is [eth0] Sep 6 00:40:29.705843 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 6 00:40:29.706133 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 6 00:40:29.706224 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 6 00:40:29.706609 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:40:29.711869 systemd-networkd[1183]: eth0: DHCPv6 lease lost Sep 6 00:40:29.713566 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:40:29.713758 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:40:29.716531 systemd[1]: Starting systemd-networkd.service... 
Sep 6 00:40:29.750972 systemd-networkd[1585]: enP34889s1: Link UP Sep 6 00:40:29.750984 systemd-networkd[1585]: enP34889s1: Gained carrier Sep 6 00:40:29.752423 systemd-networkd[1585]: eth0: Link UP Sep 6 00:40:29.752433 systemd-networkd[1585]: eth0: Gained carrier Sep 6 00:40:29.752913 systemd-networkd[1585]: lo: Link UP Sep 6 00:40:29.752922 systemd-networkd[1585]: lo: Gained carrier Sep 6 00:40:29.753258 systemd-networkd[1585]: eth0: Gained IPv6LL Sep 6 00:40:29.753560 systemd-networkd[1585]: Enumeration completed Sep 6 00:40:29.757021 waagent[1527]: 2025-09-06T00:40:29.755605Z INFO Daemon Daemon Create user account if not exists Sep 6 00:40:29.757021 waagent[1527]: 2025-09-06T00:40:29.756408Z INFO Daemon Daemon User core already exists, skip useradd Sep 6 00:40:29.753692 systemd[1]: Started systemd-networkd.service. Sep 6 00:40:29.758832 waagent[1527]: 2025-09-06T00:40:29.757483Z INFO Daemon Daemon Configure sudoer Sep 6 00:40:29.759444 waagent[1527]: 2025-09-06T00:40:29.759382Z INFO Daemon Daemon Configure sshd Sep 6 00:40:29.760368 waagent[1527]: 2025-09-06T00:40:29.760314Z INFO Daemon Daemon Deploy ssh public key. Sep 6 00:40:29.771012 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:40:29.777384 systemd-networkd[1585]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:40:29.800920 systemd-networkd[1585]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 6 00:40:29.806411 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:40:30.854286 waagent[1527]: 2025-09-06T00:40:30.854191Z INFO Daemon Daemon Provisioning complete Sep 6 00:40:30.867277 waagent[1527]: 2025-09-06T00:40:30.867184Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 6 00:40:30.876249 waagent[1527]: 2025-09-06T00:40:30.867787Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 6 00:40:30.876249 waagent[1527]: 2025-09-06T00:40:30.869849Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 6 00:40:31.155582 waagent[1591]: 2025-09-06T00:40:31.155364Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 6 00:40:31.156368 waagent[1591]: 2025-09-06T00:40:31.156294Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 00:40:31.156525 waagent[1591]: 2025-09-06T00:40:31.156468Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 00:40:31.167691 waagent[1591]: 2025-09-06T00:40:31.167600Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
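The DHCPv4 lease above (10.200.8.17/24, gateway 10.200.8.1, served from 168.63.129.16) can be sanity-checked with the standard library; the derived broadcast address matches the `brd 10.200.8.255` that waagent prints in its interface dumps later in this log. Values below are copied from the log, nothing is queried live:

```python
# Sketch: sanity-check the DHCPv4 lease reported by systemd-networkd above.
from ipaddress import ip_address, ip_interface

iface = ip_interface("10.200.8.17/24")   # address acquired by eth0
gateway = ip_address("10.200.8.1")       # gateway from the lease

assert gateway in iface.network          # gateway is on-link
print(iface.network)                     # 10.200.8.0/24
print(iface.network.broadcast_address)   # 10.200.8.255, matches 'brd' below
```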
Sep 6 00:40:31.167894 waagent[1591]: 2025-09-06T00:40:31.167841Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 6 00:40:31.221404 waagent[1591]: 2025-09-06T00:40:31.221243Z INFO ExtHandler ExtHandler Found private key matching thumbprint F20C9784B1E9BEFB38F91E691B194BBD16386307 Sep 6 00:40:31.221756 waagent[1591]: 2025-09-06T00:40:31.221686Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 6 00:40:31.235542 waagent[1591]: 2025-09-06T00:40:31.235463Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: e2279a93-c6e9-47e7-8f13-30c8e149628b New eTag: 9527822865016669468] Sep 6 00:40:31.236176 waagent[1591]: 2025-09-06T00:40:31.236112Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 00:40:31.314937 waagent[1591]: 2025-09-06T00:40:31.314739Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 00:40:31.324431 waagent[1591]: 2025-09-06T00:40:31.324327Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1591 Sep 6 00:40:31.328020 waagent[1591]: 2025-09-06T00:40:31.327948Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 00:40:31.329222 waagent[1591]: 2025-09-06T00:40:31.329160Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 6 00:40:31.393780 waagent[1591]: 2025-09-06T00:40:31.393702Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 00:40:31.394333 waagent[1591]: 2025-09-06T00:40:31.394259Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 00:40:31.403716 waagent[1591]: 2025-09-06T00:40:31.403651Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 6 00:40:31.404291 waagent[1591]: 2025-09-06T00:40:31.404224Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 00:40:31.405494 waagent[1591]: 2025-09-06T00:40:31.405425Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 6 00:40:31.406963 waagent[1591]: 2025-09-06T00:40:31.406858Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 00:40:31.408093 waagent[1591]: 2025-09-06T00:40:31.408034Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 00:40:31.408253 waagent[1591]: 2025-09-06T00:40:31.408203Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 00:40:31.408838 waagent[1591]: 2025-09-06T00:40:31.408764Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 6 00:40:31.409161 waagent[1591]: 2025-09-06T00:40:31.409101Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 00:40:31.409161 waagent[1591]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 00:40:31.409161 waagent[1591]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 00:40:31.409161 waagent[1591]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 00:40:31.409161 waagent[1591]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 00:40:31.409161 waagent[1591]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 00:40:31.409161 waagent[1591]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 00:40:31.412377 waagent[1591]: 2025-09-06T00:40:31.412286Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 6 00:40:31.412675 waagent[1591]: 2025-09-06T00:40:31.412615Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 00:40:31.413191 waagent[1591]: 2025-09-06T00:40:31.413134Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 00:40:31.413650 waagent[1591]: 2025-09-06T00:40:31.413590Z INFO EnvHandler ExtHandler Configure routes Sep 6 00:40:31.413796 waagent[1591]: 2025-09-06T00:40:31.413746Z INFO EnvHandler ExtHandler Gateway:None Sep 6 00:40:31.413970 waagent[1591]: 2025-09-06T00:40:31.413922Z INFO EnvHandler ExtHandler Routes:None Sep 6 00:40:31.414712 waagent[1591]: 2025-09-06T00:40:31.414654Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 00:40:31.414877 waagent[1591]: 2025-09-06T00:40:31.414806Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 00:40:31.415488 waagent[1591]: 2025-09-06T00:40:31.415424Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 00:40:31.415628 waagent[1591]: 2025-09-06T00:40:31.415578Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 6 00:40:31.415903 waagent[1591]: 2025-09-06T00:40:31.415851Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 00:40:31.426348 waagent[1591]: 2025-09-06T00:40:31.426277Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 6 00:40:31.427487 waagent[1591]: 2025-09-06T00:40:31.427227Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 00:40:31.431368 waagent[1591]: 2025-09-06T00:40:31.431284Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Sep 6 00:40:31.444010 waagent[1591]: 2025-09-06T00:40:31.443932Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1585' Sep 6 00:40:31.473658 waagent[1591]: 2025-09-06T00:40:31.473564Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
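The routing table waagent dumps above is raw /proc/net/route, where the Destination, Gateway, and Mask columns are little-endian hex words. Decoding them shows the two host routes are exactly the endpoints the daemon tested earlier: the Azure WireServer (168.63.129.16) and the instance metadata service (169.254.169.254). A small decoding sketch:

```python
# Sketch: decode the little-endian hex addresses in the /proc/net/route
# dump above. Pure illustration; the hex values are copied from the log.
import socket
import struct

def hex_to_ip(h: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian hex words
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

print(hex_to_ip("0108C80A"))  # 10.200.8.1     (default gateway)
print(hex_to_ip("0008C80A"))  # 10.200.8.0     (local /24 subnet)
print(hex_to_ip("10813FA8"))  # 168.63.129.16  (Azure WireServer host route)
print(hex_to_ip("FEA9FEA9"))  # 169.254.169.254 (IMDS host route)
```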
Sep 6 00:40:31.517618 waagent[1591]: 2025-09-06T00:40:31.515902Z INFO MonitorHandler ExtHandler Network interfaces: Sep 6 00:40:31.517618 waagent[1591]: Executing ['ip', '-a', '-o', 'link']: Sep 6 00:40:31.517618 waagent[1591]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 6 00:40:31.517618 waagent[1591]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:d9:a0 brd ff:ff:ff:ff:ff:ff Sep 6 00:40:31.517618 waagent[1591]: 3: enP34889s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:d9:a0 brd ff:ff:ff:ff:ff:ff\ altname enP34889p0s2 Sep 6 00:40:31.517618 waagent[1591]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 6 00:40:31.517618 waagent[1591]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 6 00:40:31.517618 waagent[1591]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 6 00:40:31.517618 waagent[1591]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 6 00:40:31.517618 waagent[1591]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 6 00:40:31.517618 waagent[1591]: 2: eth0 inet6 fe80::7e1e:52ff:fe76:d9a0/64 scope link \ valid_lft forever preferred_lft forever Sep 6 00:40:31.750052 waagent[1591]: 2025-09-06T00:40:31.749803Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Sep 6 00:40:31.756259 waagent[1591]: 2025-09-06T00:40:31.756163Z INFO EnvHandler ExtHandler Firewall rules: Sep 6 00:40:31.756259 waagent[1591]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:31.756259 waagent[1591]: pkts bytes target prot opt in out source destination Sep 6 00:40:31.756259 waagent[1591]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:31.756259 waagent[1591]: pkts bytes target prot opt in out source destination Sep 6 00:40:31.756259 waagent[1591]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:31.756259 waagent[1591]: pkts bytes target prot opt in out source destination Sep 6 00:40:31.756259 waagent[1591]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 00:40:31.756259 waagent[1591]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 00:40:31.761044 waagent[1591]: 2025-09-06T00:40:31.760978Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 6 00:40:31.764727 waagent[1591]: 2025-09-06T00:40:31.764656Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 6 00:40:31.873190 waagent[1527]: 2025-09-06T00:40:31.873030Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 6 00:40:31.878568 waagent[1527]: 2025-09-06T00:40:31.878494Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 6 00:40:32.970419 waagent[1628]: 2025-09-06T00:40:32.970288Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 6 00:40:32.971229 waagent[1628]: 2025-09-06T00:40:32.971157Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 6 00:40:32.971400 waagent[1628]: 2025-09-06T00:40:32.971344Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 6 00:40:32.971571 waagent[1628]: 2025-09-06T00:40:32.971521Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 6 
00:40:32.989203 waagent[1628]: 2025-09-06T00:40:32.989073Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 00:40:32.989676 waagent[1628]: 2025-09-06T00:40:32.989609Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 00:40:32.989890 waagent[1628]: 2025-09-06T00:40:32.989835Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 00:40:32.990144 waagent[1628]: 2025-09-06T00:40:32.990092Z INFO ExtHandler ExtHandler Initializing the goal state... Sep 6 00:40:33.002167 waagent[1628]: 2025-09-06T00:40:33.002085Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 6 00:40:33.009938 waagent[1628]: 2025-09-06T00:40:33.009871Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 6 00:40:33.010925 waagent[1628]: 2025-09-06T00:40:33.010866Z INFO ExtHandler Sep 6 00:40:33.011105 waagent[1628]: 2025-09-06T00:40:33.011053Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c00b24a8-2e11-49a4-a869-7b7c04c9131e eTag: 9527822865016669468 source: Fabric] Sep 6 00:40:33.011862 waagent[1628]: 2025-09-06T00:40:33.011789Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 6 00:40:33.013065 waagent[1628]: 2025-09-06T00:40:33.013003Z INFO ExtHandler Sep 6 00:40:33.013219 waagent[1628]: 2025-09-06T00:40:33.013169Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 6 00:40:33.019794 waagent[1628]: 2025-09-06T00:40:33.019738Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 6 00:40:33.020316 waagent[1628]: 2025-09-06T00:40:33.020264Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 00:40:33.041987 waagent[1628]: 2025-09-06T00:40:33.041910Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 6 00:40:33.101174 waagent[1628]: 2025-09-06T00:40:33.101024Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F20C9784B1E9BEFB38F91E691B194BBD16386307', 'hasPrivateKey': True} Sep 6 00:40:33.102627 waagent[1628]: 2025-09-06T00:40:33.102541Z INFO ExtHandler Fetch goal state from WireServer completed Sep 6 00:40:33.103535 waagent[1628]: 2025-09-06T00:40:33.103469Z INFO ExtHandler ExtHandler Goal state initialization completed. Sep 6 00:40:33.122086 waagent[1628]: 2025-09-06T00:40:33.121967Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 6 00:40:33.131525 waagent[1628]: 2025-09-06T00:40:33.131402Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 00:40:33.136039 waagent[1628]: 2025-09-06T00:40:33.135911Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 6 00:40:33.136319 waagent[1628]: 2025-09-06T00:40:33.136257Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 6 00:40:33.162087 waagent[1628]: 2025-09-06T00:40:33.161940Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Sep 6 00:40:33.162087 waagent[1628]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.162087 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.162087 waagent[1628]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.162087 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.162087 waagent[1628]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.162087 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.162087 waagent[1628]: 55 7857 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 00:40:33.162087 waagent[1628]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 00:40:33.163355 waagent[1628]: 2025-09-06T00:40:33.163281Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 6 00:40:33.166362 waagent[1628]: 2025-09-06T00:40:33.166257Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 6 00:40:33.166632 waagent[1628]: 2025-09-06T00:40:33.166577Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 00:40:33.167061 waagent[1628]: 2025-09-06T00:40:33.167003Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 00:40:33.176227 waagent[1628]: 2025-09-06T00:40:33.176166Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 6 00:40:33.176769 waagent[1628]: 2025-09-06T00:40:33.176707Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 00:40:33.184563 waagent[1628]: 2025-09-06T00:40:33.184491Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1628 Sep 6 00:40:33.187772 waagent[1628]: 2025-09-06T00:40:33.187702Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 00:40:33.188570 waagent[1628]: 2025-09-06T00:40:33.188506Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 6 00:40:33.189442 waagent[1628]: 2025-09-06T00:40:33.189382Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 6 00:40:33.192043 waagent[1628]: 2025-09-06T00:40:33.191981Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 6 00:40:33.192400 waagent[1628]: 2025-09-06T00:40:33.192344Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 6 00:40:33.193756 waagent[1628]: 2025-09-06T00:40:33.193696Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 00:40:33.194280 waagent[1628]: 2025-09-06T00:40:33.194222Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 00:40:33.194469 waagent[1628]: 2025-09-06T00:40:33.194410Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 00:40:33.195061 waagent[1628]: 2025-09-06T00:40:33.195005Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 6 00:40:33.195528 waagent[1628]: 2025-09-06T00:40:33.195472Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 6 00:40:33.196430 waagent[1628]: 2025-09-06T00:40:33.196375Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 00:40:33.196620 waagent[1628]: 2025-09-06T00:40:33.196551Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 00:40:33.196620 waagent[1628]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 00:40:33.196620 waagent[1628]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 00:40:33.196620 waagent[1628]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 00:40:33.196620 waagent[1628]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 00:40:33.196620 waagent[1628]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 00:40:33.196620 waagent[1628]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 00:40:33.196925 waagent[1628]: 2025-09-06T00:40:33.196761Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 00:40:33.197235 waagent[1628]: 2025-09-06T00:40:33.197180Z INFO EnvHandler ExtHandler Configure routes Sep 6 00:40:33.197871 waagent[1628]: 2025-09-06T00:40:33.197796Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 00:40:33.198102 waagent[1628]: 2025-09-06T00:40:33.198047Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 00:40:33.198657 waagent[1628]: 2025-09-06T00:40:33.198604Z INFO EnvHandler ExtHandler Gateway:None Sep 6 00:40:33.200850 waagent[1628]: 2025-09-06T00:40:33.200719Z INFO EnvHandler ExtHandler Routes:None Sep 6 00:40:33.202379 waagent[1628]: 2025-09-06T00:40:33.202323Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 00:40:33.202617 waagent[1628]: 2025-09-06T00:40:33.202558Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 6 00:40:33.203872 waagent[1628]: 2025-09-06T00:40:33.203650Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 00:40:33.231925 waagent[1628]: 2025-09-06T00:40:33.231791Z INFO MonitorHandler ExtHandler Network interfaces: Sep 6 00:40:33.231925 waagent[1628]: Executing ['ip', '-a', '-o', 'link']: Sep 6 00:40:33.231925 waagent[1628]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 6 00:40:33.231925 waagent[1628]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:d9:a0 brd ff:ff:ff:ff:ff:ff Sep 6 00:40:33.231925 waagent[1628]: 3: enP34889s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:76:d9:a0 brd ff:ff:ff:ff:ff:ff\ altname enP34889p0s2 Sep 6 00:40:33.231925 waagent[1628]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 6 00:40:33.231925 waagent[1628]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 6 00:40:33.231925 waagent[1628]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 6 00:40:33.231925 waagent[1628]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 6 00:40:33.231925 waagent[1628]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 6 00:40:33.231925 waagent[1628]: 2: eth0 inet6 fe80::7e1e:52ff:fe76:d9a0/64 scope link \ valid_lft forever preferred_lft forever Sep 6 00:40:33.233474 waagent[1628]: 2025-09-06T00:40:33.233410Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 00:40:33.235621 waagent[1628]: 2025-09-06T00:40:33.235564Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 6 00:40:33.278570 waagent[1628]: 2025-09-06T00:40:33.278490Z INFO ExtHandler ExtHandler Sep 6 00:40:33.283487 waagent[1628]: 2025-09-06T00:40:33.283394Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 36618888-dbdf-4e5a-93af-fcffc40078c7 correlation cb1b79ee-4428-49aa-a45d-e887bafc10e1 created: 2025-09-06T00:39:10.729462Z] Sep 6 00:40:33.287162 waagent[1628]: 2025-09-06T00:40:33.287039Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 6 00:40:33.292207 waagent[1628]: 2025-09-06T00:40:33.292140Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: Sep 6 00:40:33.292207 waagent[1628]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.292207 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.292207 waagent[1628]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.292207 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.292207 waagent[1628]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.292207 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.292207 waagent[1628]: 82 11252 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 00:40:33.292207 waagent[1628]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 00:40:33.292719 waagent[1628]: 2025-09-06T00:40:33.292660Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 14 ms] Sep 6 00:40:33.323880 waagent[1628]: 2025-09-06T00:40:33.323787Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 6 00:40:33.332345 waagent[1628]: 2025-09-06T00:40:33.332211Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8AC101CC-D456-4D91-919C-6EB27B73B657;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 6 00:40:33.353748 waagent[1628]: 2025-09-06T00:40:33.353630Z INFO EnvHandler ExtHandler The firewall was setup successfully: Sep 6 00:40:33.353748 waagent[1628]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.353748 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.353748 waagent[1628]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.353748 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.353748 waagent[1628]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 00:40:33.353748 waagent[1628]: pkts bytes target prot opt in out source destination Sep 6 00:40:33.353748 waagent[1628]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 6 00:40:33.353748 waagent[1628]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 00:40:33.353748 waagent[1628]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 00:40:34.002373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:40:34.002685 systemd[1]: Stopped kubelet.service. Sep 6 00:40:34.002755 systemd[1]: kubelet.service: Consumed 1.125s CPU time. Sep 6 00:40:34.004906 systemd[1]: Starting kubelet.service... Sep 6 00:40:34.123325 systemd[1]: Started kubelet.service. Sep 6 00:40:34.733888 kubelet[1678]: E0906 00:40:34.733801 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:40:34.737183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:40:34.737367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:40:44.752316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:40:44.752637 systemd[1]: Stopped kubelet.service. Sep 6 00:40:44.754886 systemd[1]: Starting kubelet.service... Sep 6 00:40:45.100087 systemd[1]: Started kubelet.service. 
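The EnvHandler output above shows the agent's check-then-add pattern for its WireServer rules: probe for a rule with `iptables -C` (the log earlier prints such a probe argv against the security table), then append whatever is missing — here the "ACCEPT DNS" rule (`tcp dpt:53`), after which the final table carries all three rules. A simplified Python/subprocess sketch of that pattern; the helper names are made up, the chain and rule order mirror the final table above, and the real agent does considerably more (security table, rule positioning):

```python
# Sketch: waagent-style check-then-add for the WireServer firewall rules.
import subprocess

WIRESERVER = "168.63.129.16"

def rule_exists(spec: list[str]) -> bool:
    # `iptables -C` exits 0 when an identical rule is already present
    return subprocess.run(["iptables", "-w", "-C", "OUTPUT", *spec],
                          capture_output=True).returncode == 0

def ensure(spec: list[str]) -> None:
    if not rule_exists(spec):
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", *spec], check=True)

# order matches the final table: allow DNS, allow root-owned traffic,
# then drop any other new flows to the WireServer
ensure(["-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"])
ensure(["-d", WIRESERVER, "-p", "tcp", "-m", "owner",
        "--uid-owner", "0", "-j", "ACCEPT"])
ensure(["-d", WIRESERVER, "-p", "tcp", "-m", "conntrack",
        "--ctstate", "INVALID,NEW", "-j", "DROP"])
```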
Sep 6 00:40:45.542153 kubelet[1687]: E0906 00:40:45.542080 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:40:45.544228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:40:45.544411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:40:55.752343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 6 00:40:55.752651 systemd[1]: Stopped kubelet.service. Sep 6 00:40:55.754724 systemd[1]: Starting kubelet.service... Sep 6 00:40:56.108045 systemd[1]: Started kubelet.service. Sep 6 00:40:56.543518 kubelet[1696]: E0906 00:40:56.543447 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:40:56.545486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:40:56.545674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:40:57.665050 systemd[1]: Created slice system-sshd.slice. Sep 6 00:40:57.667233 systemd[1]: Started sshd@0-10.200.8.17:22-10.200.16.10:39326.service. Sep 6 00:40:58.465034 sshd[1702]: Accepted publickey for core from 10.200.16.10 port 39326 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:40:58.466727 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:40:58.472251 systemd[1]: Started session-3.scope. Sep 6 00:40:58.472777 systemd-logind[1421]: New session 3 of user core. Sep 6 00:40:59.021444 systemd[1]: Started sshd@1-10.200.8.17:22-10.200.16.10:39336.service. Sep 6 00:40:59.654632 sshd[1707]: Accepted publickey for core from 10.200.16.10 port 39336 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:40:59.656291 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:40:59.661659 systemd[1]: Started session-4.scope. Sep 6 00:40:59.662309 systemd-logind[1421]: New session 4 of user core. Sep 6 00:41:00.116895 sshd[1707]: pam_unix(sshd:session): session closed for user core Sep 6 00:41:00.120277 systemd[1]: sshd@1-10.200.8.17:22-10.200.16.10:39336.service: Deactivated successfully. Sep 6 00:41:00.121369 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:41:00.122105 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:41:00.122949 systemd-logind[1421]: Removed session 4. Sep 6 00:41:00.223397 systemd[1]: Started sshd@2-10.200.8.17:22-10.200.16.10:46566.service. Sep 6 00:41:00.855331 sshd[1713]: Accepted publickey for core from 10.200.16.10 port 46566 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:41:00.857064 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:41:00.862467 systemd[1]: Started session-5.scope. Sep 6 00:41:00.863148 systemd-logind[1421]: New session 5 of user core. 
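The sshd entries above identify the accepted key by a "SHA256:..." fingerprint. OpenSSH derives that string as the unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch of the derivation; it assumes a plain "type blob comment" authorized_keys line with no options prefix:

```python
# Sketch: compute an OpenSSH SHA256 key fingerprint like the one sshd logs:
# base64(sha256(raw key blob)) with the trailing '=' padding stripped.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    blob_b64 = authorized_keys_line.split()[1]   # "ssh-rsa AAAA... comment"
    blob = base64.b64decode(blob_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# usage (hypothetical key material; substitute a real line to reproduce
# the fingerprint sshd printed above):
# print(ssh_fingerprint(open("/home/core/.ssh/authorized_keys").read()))
```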
Sep 6 00:41:01.306435 sshd[1713]: pam_unix(sshd:session): session closed for user core Sep 6 00:41:01.309795 systemd[1]: sshd@2-10.200.8.17:22-10.200.16.10:46566.service: Deactivated successfully. Sep 6 00:41:01.310790 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:41:01.311465 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:41:01.312291 systemd-logind[1421]: Removed session 5. Sep 6 00:41:01.413313 systemd[1]: Started sshd@3-10.200.8.17:22-10.200.16.10:46582.service. Sep 6 00:41:01.458955 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 6 00:41:02.046550 sshd[1719]: Accepted publickey for core from 10.200.16.10 port 46582 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:41:02.048316 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:41:02.053769 systemd[1]: Started session-6.scope. Sep 6 00:41:02.054447 systemd-logind[1421]: New session 6 of user core. Sep 6 00:41:02.500174 sshd[1719]: pam_unix(sshd:session): session closed for user core Sep 6 00:41:02.503758 systemd[1]: sshd@3-10.200.8.17:22-10.200.16.10:46582.service: Deactivated successfully. Sep 6 00:41:02.504803 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:41:02.505522 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:41:02.506379 systemd-logind[1421]: Removed session 6. Sep 6 00:41:02.608307 systemd[1]: Started sshd@4-10.200.8.17:22-10.200.16.10:46592.service. Sep 6 00:41:03.241685 sshd[1725]: Accepted publickey for core from 10.200.16.10 port 46592 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:41:03.243393 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:41:03.248793 systemd[1]: Started session-7.scope. Sep 6 00:41:03.249467 systemd-logind[1421]: New session 7 of user core. Sep 6 00:41:03.789795 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:41:03.790136 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:41:03.818726 systemd[1]: Starting docker.service... 
Sep 6 00:41:03.870268 env[1738]: time="2025-09-06T00:41:03.870184591Z" level=info msg="Starting up" Sep 6 00:41:03.871899 env[1738]: time="2025-09-06T00:41:03.871864591Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:41:03.872357 env[1738]: time="2025-09-06T00:41:03.872339091Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:41:03.872484 env[1738]: time="2025-09-06T00:41:03.872468491Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:41:03.872545 env[1738]: time="2025-09-06T00:41:03.872536291Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:41:03.874489 env[1738]: time="2025-09-06T00:41:03.874469690Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:41:03.874597 env[1738]: time="2025-09-06T00:41:03.874586690Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:41:03.874654 env[1738]: time="2025-09-06T00:41:03.874642090Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:41:03.874697 env[1738]: time="2025-09-06T00:41:03.874689390Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:41:03.881733 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2938404342-merged.mount: Deactivated successfully. Sep 6 00:41:03.943154 env[1738]: time="2025-09-06T00:41:03.943105289Z" level=info msg="Loading containers: start." Sep 6 00:41:04.105847 kernel: Initializing XFRM netlink socket Sep 6 00:41:04.140353 env[1738]: time="2025-09-06T00:41:04.140291184Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:41:04.226680 systemd-networkd[1585]: docker0: Link UP Sep 6 00:41:04.257178 env[1738]: time="2025-09-06T00:41:04.257119581Z" level=info msg="Loading containers: done." Sep 6 00:41:04.270898 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2055925191-merged.mount: Deactivated successfully. Sep 6 00:41:04.286267 env[1738]: time="2025-09-06T00:41:04.286219980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:41:04.286531 env[1738]: time="2025-09-06T00:41:04.286503580Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:41:04.286670 env[1738]: time="2025-09-06T00:41:04.286649680Z" level=info msg="Daemon has completed initialization" Sep 6 00:41:04.323348 systemd[1]: Started docker.service. Sep 6 00:41:04.334552 env[1738]: time="2025-09-06T00:41:04.334482979Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:41:05.715788 env[1434]: time="2025-09-06T00:41:05.715731546Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:41:06.496858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469204024.mount: Deactivated successfully. Sep 6 00:41:06.752384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 6 00:41:06.752707 systemd[1]: Stopped kubelet.service. Sep 6 00:41:06.755264 systemd[1]: Starting kubelet.service... Sep 6 00:41:06.935452 systemd[1]: Started kubelet.service. 
Sep 6 00:41:06.979076 kubelet[1856]: E0906 00:41:06.979008 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:41:06.981208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:41:06.981391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:41:07.547939 update_engine[1422]: I0906 00:41:07.547876 1422 update_attempter.cc:509] Updating boot flags... Sep 6 00:41:09.040411 env[1434]: time="2025-09-06T00:41:09.040332379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:09.046143 env[1434]: time="2025-09-06T00:41:09.046089779Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:09.051125 env[1434]: time="2025-09-06T00:41:09.051066679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:09.058186 env[1434]: time="2025-09-06T00:41:09.058125379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:09.059098 env[1434]: time="2025-09-06T00:41:09.059047378Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:41:09.060138 env[1434]: time="2025-09-06T00:41:09.060100878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:41:10.882385 env[1434]: time="2025-09-06T00:41:10.882311847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:10.888260 env[1434]: time="2025-09-06T00:41:10.888209547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:10.893916 env[1434]: time="2025-09-06T00:41:10.893877747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:10.897181 env[1434]: time="2025-09-06T00:41:10.897143247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:10.897925 env[1434]: time="2025-09-06T00:41:10.897886147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 00:41:10.899272 env[1434]: 
Sep 6 00:41:12.359083 env[1434]: time="2025-09-06T00:41:12.359003525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:12.365012 env[1434]: time="2025-09-06T00:41:12.364959924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:12.370159 env[1434]: time="2025-09-06T00:41:12.370111924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:12.373503 env[1434]: time="2025-09-06T00:41:12.373461524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:12.374238 env[1434]: time="2025-09-06T00:41:12.374196524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 6 00:41:12.375246 env[1434]: time="2025-09-06T00:41:12.375215424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 6 00:41:13.526238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733841241.mount: Deactivated successfully.
Sep 6 00:41:14.184788 env[1434]: time="2025-09-06T00:41:14.184713899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:14.189916 env[1434]: time="2025-09-06T00:41:14.189858099Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:14.195753 env[1434]: time="2025-09-06T00:41:14.195702599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:14.199571 env[1434]: time="2025-09-06T00:41:14.199528299Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:14.199974 env[1434]: time="2025-09-06T00:41:14.199934299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 6 00:41:14.200922 env[1434]: time="2025-09-06T00:41:14.200891699Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 6 00:41:14.857247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415182348.mount: Deactivated successfully.
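Note that each PullImage above returns a content-addressed reference rather than the tag it was asked for: a sha256:... name is the SHA-256 digest of the referenced content, so a given reference can only ever denote one set of bytes, which is also why repeat resolutions show up as ImageUpdate events rather than new ImageCreate events. A toy illustration of the digest format only (the input is a stand-in, not a real image manifest):

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	blob := []byte(`{"schemaVersion": 2}`) // stand-in bytes only
	fmt.Printf("sha256:%x\n", sha256.Sum256(blob))
}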
Sep 6 00:41:16.367304 env[1434]: time="2025-09-06T00:41:16.367224473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.375551 env[1434]: time="2025-09-06T00:41:16.375496873Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.380682 env[1434]: time="2025-09-06T00:41:16.380636872Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.387001 env[1434]: time="2025-09-06T00:41:16.386956872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.387732 env[1434]: time="2025-09-06T00:41:16.387692072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 6 00:41:16.388933 env[1434]: time="2025-09-06T00:41:16.388900872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 00:41:16.859506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988639708.mount: Deactivated successfully.
Sep 6 00:41:16.885135 env[1434]: time="2025-09-06T00:41:16.885063467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.894274 env[1434]: time="2025-09-06T00:41:16.894224667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.899955 env[1434]: time="2025-09-06T00:41:16.899913467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.905300 env[1434]: time="2025-09-06T00:41:16.905261967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:16.905827 env[1434]: time="2025-09-06T00:41:16.905782267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 6 00:41:16.906785 env[1434]: time="2025-09-06T00:41:16.906751567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 6 00:41:17.002484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 6 00:41:17.002777 systemd[1]: Stopped kubelet.service.
Sep 6 00:41:17.005113 systemd[1]: Starting kubelet.service...
Sep 6 00:41:17.427569 systemd[1]: Started kubelet.service.
Sep 6 00:41:17.814679 kubelet[1934]: E0906 00:41:17.814619 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:41:17.816720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:41:17.816879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:41:18.296484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685660728.mount: Deactivated successfully.
Sep 6 00:41:20.976868 env[1434]: time="2025-09-06T00:41:20.976784397Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:20.985762 env[1434]: time="2025-09-06T00:41:20.985704606Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:20.991924 env[1434]: time="2025-09-06T00:41:20.991877581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:20.997950 env[1434]: time="2025-09-06T00:41:20.997906055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:20.998600 env[1434]: time="2025-09-06T00:41:20.998546763Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 6 00:41:23.817577 systemd[1]: Stopped kubelet.service.
Sep 6 00:41:23.821245 systemd[1]: Starting kubelet.service...
Sep 6 00:41:23.865761 systemd[1]: Reloading.
Sep 6 00:41:23.967675 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2025-09-06T00:41:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:41:23.967721 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2025-09-06T00:41:23Z" level=info msg="torcx already run"
Sep 6 00:41:24.085581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:41:24.085606 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:41:24.102626 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:41:24.227544 systemd[1]: Started kubelet.service.
Sep 6 00:41:24.231215 systemd[1]: Stopping kubelet.service...
Sep 6 00:41:24.231799 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 00:41:24.232083 systemd[1]: Stopped kubelet.service.
Sep 6 00:41:24.234018 systemd[1]: Starting kubelet.service...
Sep 6 00:41:24.584481 systemd[1]: Started kubelet.service.
Sep 6 00:41:24.637554 kubelet[2052]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:41:24.637554 kubelet[2052]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:41:24.637554 kubelet[2052]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:41:24.638553 kubelet[2052]: I0906 00:41:24.638499 2052 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:41:25.003871 kubelet[2052]: I0906 00:41:25.003807 2052 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 6 00:41:25.003871 kubelet[2052]: I0906 00:41:25.003863 2052 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:41:25.004259 kubelet[2052]: I0906 00:41:25.004237 2052 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 6 00:41:25.439400 kubelet[2052]: E0906 00:41:25.438790 2052 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:25.452370 kubelet[2052]: I0906 00:41:25.452027 2052 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:41:25.468056 kubelet[2052]: E0906 00:41:25.467993 2052 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:41:25.468056 kubelet[2052]: I0906 00:41:25.468048 2052 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:41:25.473116 kubelet[2052]: I0906 00:41:25.473088 2052 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:41:25.473281 kubelet[2052]: I0906 00:41:25.473222 2052 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 6 00:41:25.473423 kubelet[2052]: I0906 00:41:25.473394 2052 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:41:25.473633 kubelet[2052]: I0906 00:41:25.473421 2052 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-cde0707216","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:41:25.473831 kubelet[2052]: I0906 00:41:25.473650 2052 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:41:25.473831 kubelet[2052]: I0906 00:41:25.473673 2052 container_manager_linux.go:300] "Creating device plugin manager"
Sep 6 00:41:25.473933 kubelet[2052]: I0906 00:41:25.473863 2052 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:41:25.478966 kubelet[2052]: I0906 00:41:25.478942 2052 kubelet.go:408] "Attempting to sync node with API server"
Sep 6 00:41:25.479083 kubelet[2052]: I0906 00:41:25.478978 2052 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:41:25.479083 kubelet[2052]: I0906 00:41:25.479023 2052 kubelet.go:314] "Adding apiserver pod source"
Sep 6 00:41:25.479083 kubelet[2052]: I0906 00:41:25.479045 2052 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:41:25.484515 kubelet[2052]: W0906 00:41:25.484441 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-cde0707216&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:25.484679 kubelet[2052]: E0906 00:41:25.484658 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-cde0707216&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:25.485592 kubelet[2052]: W0906 00:41:25.485293 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:25.485592 kubelet[2052]: E0906 00:41:25.485336 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:25.485733 kubelet[2052]: I0906 00:41:25.485684 2052 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 00:41:25.486264 kubelet[2052]: I0906 00:41:25.486242 2052 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 6 00:41:25.486344 kubelet[2052]: W0906 00:41:25.486322 2052 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:41:25.496779 kubelet[2052]: I0906 00:41:25.496759 2052 server.go:1274] "Started kubelet"
Sep 6 00:41:25.498800 kubelet[2052]: I0906 00:41:25.498769 2052 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:41:25.499188 kubelet[2052]: I0906 00:41:25.499173 2052 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:41:25.499325 kubelet[2052]: I0906 00:41:25.499308 2052 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:41:25.500319 kubelet[2052]: I0906 00:41:25.500302 2052 server.go:449] "Adding debug handlers to kubelet server"
Sep 6 00:41:25.504295 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
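All of the list/watch and certificate-signing requests above die with "connection refused" because the kubelet starts before the API server it is itself about to launch as a static pod; the errors persist until kube-apiserver answers on 10.200.8.17:6443. Below is a hand-rolled readiness probe of the sort one might run while watching this phase; the address comes from the log, and InsecureSkipVerify is for illustration only.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skip certificate verification only for this probe sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.200.8.17:6443/healthz")
		if err != nil {
			fmt.Println("not ready:", err) // e.g. connect: connection refused
		} else {
			resp.Body.Close()
			fmt.Println("status:", resp.Status)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(time.Second)
	}
}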
Sep 6 00:41:25.505356 kubelet[2052]: I0906 00:41:25.504474 2052 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:41:25.513075 kubelet[2052]: I0906 00:41:25.513044 2052 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:41:25.514707 kubelet[2052]: I0906 00:41:25.514688 2052 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 6 00:41:25.515164 kubelet[2052]: E0906 00:41:25.515142 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:25.516660 kubelet[2052]: E0906 00:41:25.516574 2052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-cde0707216?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="200ms"
Sep 6 00:41:25.518467 kubelet[2052]: E0906 00:41:25.516698 2052 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-cde0707216.18628aae38053beb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-cde0707216,UID:ci-3510.3.8-n-cde0707216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-cde0707216,},FirstTimestamp:2025-09-06 00:41:25.496724459 +0000 UTC m=+0.904333248,LastTimestamp:2025-09-06 00:41:25.496724459 +0000 UTC m=+0.904333248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-cde0707216,}"
Sep 6 00:41:25.519505 kubelet[2052]: I0906 00:41:25.519481 2052 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 6 00:41:25.519592 kubelet[2052]: I0906 00:41:25.519548 2052 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:41:25.522018 kubelet[2052]: I0906 00:41:25.521989 2052 factory.go:221] Registration of the containerd container factory successfully
Sep 6 00:41:25.522018 kubelet[2052]: I0906 00:41:25.522017 2052 factory.go:221] Registration of the systemd container factory successfully
Sep 6 00:41:25.522148 kubelet[2052]: I0906 00:41:25.522092 2052 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:41:25.526954 kubelet[2052]: W0906 00:41:25.526888 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:25.527114 kubelet[2052]: E0906 00:41:25.527091 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:25.530986 kubelet[2052]: E0906 00:41:25.530960 2052 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:41:25.579388 kubelet[2052]: I0906 00:41:25.579360 2052 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 6 00:41:25.579611 kubelet[2052]: I0906 00:41:25.579587 2052 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 6 00:41:25.579718 kubelet[2052]: I0906 00:41:25.579709 2052 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:41:25.585073 kubelet[2052]: I0906 00:41:25.585031 2052 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:41:25.587832 kubelet[2052]: I0906 00:41:25.587794 2052 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:41:25.587971 kubelet[2052]: I0906 00:41:25.587955 2052 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 6 00:41:25.588031 kubelet[2052]: I0906 00:41:25.587993 2052 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 6 00:41:25.588081 kubelet[2052]: E0906 00:41:25.588054 2052 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:41:25.588914 kubelet[2052]: I0906 00:41:25.588893 2052 policy_none.go:49] "None policy: Start"
Sep 6 00:41:25.589392 kubelet[2052]: W0906 00:41:25.589138 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:25.589392 kubelet[2052]: E0906 00:41:25.589209 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:25.590054 kubelet[2052]: I0906 00:41:25.590034 2052 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 6 00:41:25.590134 kubelet[2052]: I0906 00:41:25.590077 2052 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:41:25.602158 systemd[1]: Created slice kubepods.slice.
Sep 6 00:41:25.606833 systemd[1]: Created slice kubepods-burstable.slice.
Sep 6 00:41:25.609936 systemd[1]: Created slice kubepods-besteffort.slice.
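With the systemd cgroup driver the kubelet parents pods under kubepods.slice, one sub-slice per QoS class and then one per pod, which is what these Created slice lines (and the per-pod ones just below) show. The helper here is a guess at the naming scheme for illustration, not kubelet source; note the static-pod UIDs in this log are config hashes, so they contain no dashes to escape.

package main

import (
	"fmt"
	"strings"
)

// podSlice builds a systemd slice name for a pod: dashes in the UID are
// escaped to underscores, and guaranteed pods sit directly under kubepods.
func podSlice(qos, uid string) string {
	uid = strings.ReplaceAll(uid, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// Reproduces kubepods-burstable-pod605fa4188f5b105934cb1ef12ae4e8ad.slice
	fmt.Println(podSlice("burstable", "605fa4188f5b105934cb1ef12ae4e8ad"))
}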
Sep 6 00:41:25.615379 kubelet[2052]: E0906 00:41:25.615350 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:25.615645 kubelet[2052]: I0906 00:41:25.615629 2052 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 6 00:41:25.615999 kubelet[2052]: I0906 00:41:25.615985 2052 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:41:25.616330 kubelet[2052]: I0906 00:41:25.616283 2052 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:41:25.616663 kubelet[2052]: I0906 00:41:25.616641 2052 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:41:25.620771 kubelet[2052]: E0906 00:41:25.620746 2052 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:25.699289 systemd[1]: Created slice kubepods-burstable-pod605fa4188f5b105934cb1ef12ae4e8ad.slice.
Sep 6 00:41:25.709798 systemd[1]: Created slice kubepods-burstable-pod471171024478c7279be1af1e25fc8da0.slice.
Sep 6 00:41:25.714344 systemd[1]: Created slice kubepods-burstable-poda2de5d17a1338243a13eaf1ab8ee5ac5.slice.
Sep 6 00:41:25.717714 kubelet[2052]: E0906 00:41:25.717518 2052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-cde0707216?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="400ms"
Sep 6 00:41:25.718084 kubelet[2052]: I0906 00:41:25.717898 2052 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.718335 kubelet[2052]: E0906 00:41:25.718311 2052 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.820997 kubelet[2052]: I0906 00:41:25.820934 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a2de5d17a1338243a13eaf1ab8ee5ac5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-cde0707216\" (UID: \"a2de5d17a1338243a13eaf1ab8ee5ac5\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821262 kubelet[2052]: I0906 00:41:25.821044 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/605fa4188f5b105934cb1ef12ae4e8ad-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-cde0707216\" (UID: \"605fa4188f5b105934cb1ef12ae4e8ad\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821262 kubelet[2052]: I0906 00:41:25.821082 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/605fa4188f5b105934cb1ef12ae4e8ad-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-cde0707216\" (UID: \"605fa4188f5b105934cb1ef12ae4e8ad\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821262 kubelet[2052]: I0906 00:41:25.821105 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821262 kubelet[2052]: I0906 00:41:25.821132 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821262 kubelet[2052]: I0906 00:41:25.821158 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/605fa4188f5b105934cb1ef12ae4e8ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-cde0707216\" (UID: \"605fa4188f5b105934cb1ef12ae4e8ad\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821419 kubelet[2052]: I0906 00:41:25.821177 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821419 kubelet[2052]: I0906 00:41:25.821200 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.821419 kubelet[2052]: I0906 00:41:25.821227 2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.921061 kubelet[2052]: I0906 00:41:25.921024 2052 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:25.921785 kubelet[2052]: E0906 00:41:25.921739 2052 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:26.009633 env[1434]: time="2025-09-06T00:41:26.009576048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-cde0707216,Uid:605fa4188f5b105934cb1ef12ae4e8ad,Namespace:kube-system,Attempt:0,}"
Sep 6 00:41:26.013744 env[1434]: time="2025-09-06T00:41:26.013703442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-cde0707216,Uid:471171024478c7279be1af1e25fc8da0,Namespace:kube-system,Attempt:0,}"
Sep 6 00:41:26.017561 env[1434]: time="2025-09-06T00:41:26.017523707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-cde0707216,Uid:a2de5d17a1338243a13eaf1ab8ee5ac5,Namespace:kube-system,Attempt:0,}"
Sep 6 00:41:26.118737 kubelet[2052]: E0906 00:41:26.118668 2052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-cde0707216?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="800ms"
Sep 6 00:41:26.324534 kubelet[2052]: I0906 00:41:26.324041 2052 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:26.324534 kubelet[2052]: E0906 00:41:26.324426 2052 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:26.487381 kubelet[2052]: W0906 00:41:26.487326 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:26.487381 kubelet[2052]: E0906 00:41:26.487392 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:26.589469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793884085.mount: Deactivated successfully.
Sep 6 00:41:26.616295 env[1434]: time="2025-09-06T00:41:26.616232933Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.619648 env[1434]: time="2025-09-06T00:41:26.619599955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.630003 env[1434]: time="2025-09-06T00:41:26.629961743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.636603 env[1434]: time="2025-09-06T00:41:26.636548272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.640149 env[1434]: time="2025-09-06T00:41:26.640107611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.644590 env[1434]: time="2025-09-06T00:41:26.644549735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.652624 env[1434]: time="2025-09-06T00:41:26.652585102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.658186 env[1434]: time="2025-09-06T00:41:26.658145132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
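The "Failed to ensure lease exists, will retry" errors follow a doubling schedule: interval="200ms" and "400ms" earlier, "800ms" here, then "1.6s" and "3.2s" further down. Below is a generic sketch of that cadence; the cap is an assumption, and this is not the kubelet's actual controller code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after each failure.
func retryWithBackoff(op func() error) {
	interval := 200 * time.Millisecond // first interval seen in the log
	const maxInterval = 7 * time.Second // assumed cap
	for {
		if err := op(); err == nil {
			return
		}
		fmt.Printf("retrying in %v\n", interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		if attempts++; attempts < 5 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
}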
Sep 6 00:41:26.666658 env[1434]: time="2025-09-06T00:41:26.666618441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.671409 env[1434]: time="2025-09-06T00:41:26.671368994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.675571 env[1434]: time="2025-09-06T00:41:26.675532791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.680287 env[1434]: time="2025-09-06T00:41:26.680250342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:41:26.757899 env[1434]: time="2025-09-06T00:41:26.756886854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:41:26.757899 env[1434]: time="2025-09-06T00:41:26.756944459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:41:26.757899 env[1434]: time="2025-09-06T00:41:26.756960361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:41:26.757899 env[1434]: time="2025-09-06T00:41:26.757152079Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65c6e68cf15c4f00e196393b0ccb92e1cf4ad8cb5b920d868b43df039c66f582 pid=2092 runtime=io.containerd.runc.v2
Sep 6 00:41:26.780669 systemd[1]: Started cri-containerd-65c6e68cf15c4f00e196393b0ccb92e1cf4ad8cb5b920d868b43df039c66f582.scope.
Sep 6 00:41:26.796848 env[1434]: time="2025-09-06T00:41:26.796747757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:41:26.797138 env[1434]: time="2025-09-06T00:41:26.796862768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:41:26.797138 env[1434]: time="2025-09-06T00:41:26.796894771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:41:26.797138 env[1434]: time="2025-09-06T00:41:26.797064788Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d2880b758be849d98cbd78fea1ebca4fa3e9706f48465e811d01e46439f2241 pid=2125 runtime=io.containerd.runc.v2
Sep 6 00:41:26.800879 env[1434]: time="2025-09-06T00:41:26.800012869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:41:26.800879 env[1434]: time="2025-09-06T00:41:26.800180985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:41:26.800879 env[1434]: time="2025-09-06T00:41:26.800308197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:41:26.800879 env[1434]: time="2025-09-06T00:41:26.800570522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76bb9ab329f06c42708a66d6dcef4e187492987e5ad942897553eec79877171f pid=2121 runtime=io.containerd.runc.v2
Sep 6 00:41:26.803544 kubelet[2052]: W0906 00:41:26.803404 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-cde0707216&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:26.803544 kubelet[2052]: E0906 00:41:26.803504 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-cde0707216&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:26.825226 systemd[1]: Started cri-containerd-76bb9ab329f06c42708a66d6dcef4e187492987e5ad942897553eec79877171f.scope.
Sep 6 00:41:26.836148 systemd[1]: Started cri-containerd-9d2880b758be849d98cbd78fea1ebca4fa3e9706f48465e811d01e46439f2241.scope.
Sep 6 00:41:26.890921 env[1434]: time="2025-09-06T00:41:26.890772429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-cde0707216,Uid:a2de5d17a1338243a13eaf1ab8ee5ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"65c6e68cf15c4f00e196393b0ccb92e1cf4ad8cb5b920d868b43df039c66f582\""
Sep 6 00:41:26.900846 env[1434]: time="2025-09-06T00:41:26.900759182Z" level=info msg="CreateContainer within sandbox \"65c6e68cf15c4f00e196393b0ccb92e1cf4ad8cb5b920d868b43df039c66f582\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 6 00:41:26.919340 kubelet[2052]: E0906 00:41:26.919269 2052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-cde0707216?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="1.6s"
Sep 6 00:41:26.942210 kubelet[2052]: W0906 00:41:26.942047 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:26.942210 kubelet[2052]: E0906 00:41:26.942159 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:26.947386 env[1434]: time="2025-09-06T00:41:26.947317424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-cde0707216,Uid:471171024478c7279be1af1e25fc8da0,Namespace:kube-system,Attempt:0,} returns sandbox id \"76bb9ab329f06c42708a66d6dcef4e187492987e5ad942897553eec79877171f\""
Sep 6 00:41:26.950989 env[1434]: time="2025-09-06T00:41:26.950941970Z" level=info msg="CreateContainer within sandbox \"76bb9ab329f06c42708a66d6dcef4e187492987e5ad942897553eec79877171f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 6 00:41:26.957261 env[1434]: time="2025-09-06T00:41:26.957222969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-cde0707216,Uid:605fa4188f5b105934cb1ef12ae4e8ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d2880b758be849d98cbd78fea1ebca4fa3e9706f48465e811d01e46439f2241\""
Sep 6 00:41:26.959627 env[1434]: time="2025-09-06T00:41:26.959590695Z" level=info msg="CreateContainer within sandbox \"9d2880b758be849d98cbd78fea1ebca4fa3e9706f48465e811d01e46439f2241\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 6 00:41:26.994632 kubelet[2052]: W0906 00:41:26.994581 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:26.994876 kubelet[2052]: E0906 00:41:26.994651 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:27.127250 kubelet[2052]: I0906 00:41:27.126759 2052 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:27.127250 kubelet[2052]: E0906 00:41:27.127208 2052 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:27.604503 kubelet[2052]: E0906 00:41:27.604455 2052 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:28.632802 kubelet[2052]: W0906 00:41:28.265658 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:28.632802 kubelet[2052]: E0906 00:41:28.265709 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:28.632802 kubelet[2052]: E0906 00:41:28.520154 2052 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-cde0707216?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="3.2s"
Sep 6 00:41:28.672203 env[1434]: time="2025-09-06T00:41:28.672113562Z" level=info msg="CreateContainer within sandbox \"65c6e68cf15c4f00e196393b0ccb92e1cf4ad8cb5b920d868b43df039c66f582\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"141616e5f58f41a7d7ec86fcb1009c6305a97441a3cd82542089580db3ec1f70\""
Sep 6 00:41:28.673415 env[1434]: time="2025-09-06T00:41:28.673371176Z" level=info msg="StartContainer for \"141616e5f58f41a7d7ec86fcb1009c6305a97441a3cd82542089580db3ec1f70\""
Sep 6 00:41:28.704292 systemd[1]: run-containerd-runc-k8s.io-141616e5f58f41a7d7ec86fcb1009c6305a97441a3cd82542089580db3ec1f70-runc.eBnaiB.mount: Deactivated successfully.
Sep 6 00:41:28.709523 systemd[1]: Started cri-containerd-141616e5f58f41a7d7ec86fcb1009c6305a97441a3cd82542089580db3ec1f70.scope.
Sep 6 00:41:28.730449 kubelet[2052]: I0906 00:41:28.729871 2052 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:28.736085 kubelet[2052]: E0906 00:41:28.736031 2052 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:29.344338 kubelet[2052]: W0906 00:41:29.344238 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-cde0707216&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:29.344739 kubelet[2052]: E0906 00:41:29.344706 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-cde0707216&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:29.624686 kubelet[2052]: W0906 00:41:29.624487 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:29.624686 kubelet[2052]: E0906 00:41:29.624602 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:41:30.129101 kubelet[2052]: W0906 00:41:30.129016 2052 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused
Sep 6 00:41:30.129101 kubelet[2052]: E0906 00:41:30.129109 2052 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError"
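The sandbox and container lifecycle running through these lines is the CRI call sequence: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox and returns a container ID, and StartContainer launches it, with containerd's runc v2 shim (the "starting signal loop" entries) doing the work underneath. Below is a rough sketch of the same three calls against a CRI endpoint; the socket path and all metadata values are illustrative assumptions, not taken from the pods above.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Assumed containerd CRI socket path for this sketch.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "demo-pod", // illustrative values only
			Namespace: "default",
			Uid:       "demo-uid",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.10"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
}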
Sep 6 00:41:30.730958 env[1434]: time="2025-09-06T00:41:30.730875415Z" level=info msg="StartContainer for \"141616e5f58f41a7d7ec86fcb1009c6305a97441a3cd82542089580db3ec1f70\" returns successfully"
Sep 6 00:41:30.766894 env[1434]: time="2025-09-06T00:41:30.766810287Z" level=info msg="CreateContainer within sandbox \"76bb9ab329f06c42708a66d6dcef4e187492987e5ad942897553eec79877171f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa4eb0c6a133959eeb7b9ce8d8bf58b13a764ac437c5ca6733c79c0a7286d648\""
Sep 6 00:41:30.767479 env[1434]: time="2025-09-06T00:41:30.767442341Z" level=info msg="StartContainer for \"aa4eb0c6a133959eeb7b9ce8d8bf58b13a764ac437c5ca6733c79c0a7286d648\""
Sep 6 00:41:30.770747 env[1434]: time="2025-09-06T00:41:30.770684618Z" level=info msg="CreateContainer within sandbox \"9d2880b758be849d98cbd78fea1ebca4fa3e9706f48465e811d01e46439f2241\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"819d011d6714f20de91344691d99d319cb5d11aea24fc24f4e5bdd01ccf2e9c4\""
Sep 6 00:41:30.771632 env[1434]: time="2025-09-06T00:41:30.771602297Z" level=info msg="StartContainer for \"819d011d6714f20de91344691d99d319cb5d11aea24fc24f4e5bdd01ccf2e9c4\""
Sep 6 00:41:30.798067 systemd[1]: Started cri-containerd-aa4eb0c6a133959eeb7b9ce8d8bf58b13a764ac437c5ca6733c79c0a7286d648.scope.
Sep 6 00:41:30.811556 systemd[1]: Started cri-containerd-819d011d6714f20de91344691d99d319cb5d11aea24fc24f4e5bdd01ccf2e9c4.scope.
Sep 6 00:41:30.885047 env[1434]: time="2025-09-06T00:41:30.884983790Z" level=info msg="StartContainer for \"aa4eb0c6a133959eeb7b9ce8d8bf58b13a764ac437c5ca6733c79c0a7286d648\" returns successfully"
Sep 6 00:41:30.895907 env[1434]: time="2025-09-06T00:41:30.895852919Z" level=info msg="StartContainer for \"819d011d6714f20de91344691d99d319cb5d11aea24fc24f4e5bdd01ccf2e9c4\" returns successfully"
Sep 6 00:41:31.939470 kubelet[2052]: I0906 00:41:31.939413 2052 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:32.698661 kubelet[2052]: E0906 00:41:32.698601 2052 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-cde0707216\" not found" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:32.875174 kubelet[2052]: I0906 00:41:32.875120 2052 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-cde0707216"
Sep 6 00:41:32.875498 kubelet[2052]: E0906 00:41:32.875473 2052 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-cde0707216\": node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:32.998340 kubelet[2052]: E0906 00:41:32.998181 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.098406 kubelet[2052]: E0906 00:41:33.098350 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.198987 kubelet[2052]: E0906 00:41:33.198942 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.300119 kubelet[2052]: E0906 00:41:33.300060 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.400874 kubelet[2052]: E0906 00:41:33.400785 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
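With kube-apiserver's container finally running, registration goes through ("Successfully registered node" above), even though status updates still race the local cache for a few seconds. Below is a client-go sketch for confirming the Node object from outside the node; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig location for a kubeadm-style control plane.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"ci-3510.3.8-n-cde0707216", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(node.Name, "created", node.CreationTimestamp)
}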
Sep 6 00:41:33.501355 kubelet[2052]: E0906 00:41:33.501289 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.602253 kubelet[2052]: E0906 00:41:33.602056 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.702793 kubelet[2052]: E0906 00:41:33.702732 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.803845 kubelet[2052]: E0906 00:41:33.803786 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:33.904636 kubelet[2052]: E0906 00:41:33.904469 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.005949 kubelet[2052]: E0906 00:41:34.005889 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.106611 kubelet[2052]: E0906 00:41:34.106554 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.207554 kubelet[2052]: E0906 00:41:34.207400 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.308043 kubelet[2052]: E0906 00:41:34.307966 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.408868 kubelet[2052]: E0906 00:41:34.408782 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.509318 kubelet[2052]: E0906 00:41:34.509270 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.610169 kubelet[2052]: E0906 00:41:34.610107 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.711094 kubelet[2052]: E0906 00:41:34.711032 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.811509 kubelet[2052]: E0906 00:41:34.811345 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:34.889611 systemd[1]: Reloading.
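The long run of "Error getting the current node from lister" messages above is the kubelet consulting its own informer cache for the Node object before that cache has completed its initial list; the reflector warnings earlier in the log are the list+watch half of the same machinery. A client-go sketch of a node informer follows (kubeconfig path assumed; not kubelet source):

package main

import (
	"fmt"
	"log"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	nodes := factory.Core().V1().Nodes().Informer()
	nodes.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("node added:", obj.(*v1.Node).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // reflectors begin their list+watch loops here
	cache.WaitForCacheSync(stop, nodes.HasSynced)
	time.Sleep(time.Minute)
}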
Sep 6 00:41:34.912065 kubelet[2052]: E0906 00:41:34.912016 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:35.000866 /usr/lib/systemd/system-generators/torcx-generator[2359]: time="2025-09-06T00:41:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:41:35.000910 /usr/lib/systemd/system-generators/torcx-generator[2359]: time="2025-09-06T00:41:35Z" level=info msg="torcx already run"
Sep 6 00:41:35.012213 kubelet[2052]: E0906 00:41:35.012160 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:35.091918 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:41:35.091945 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:41:35.109562 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:41:35.112800 kubelet[2052]: E0906 00:41:35.112741 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:35.216019 kubelet[2052]: E0906 00:41:35.215941 2052 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-cde0707216\" not found"
Sep 6 00:41:35.227528 systemd[1]: Stopping kubelet.service...
Sep 6 00:41:35.246493 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 00:41:35.246766 systemd[1]: Stopped kubelet.service.
Sep 6 00:41:35.249305 systemd[1]: Starting kubelet.service...
Sep 6 00:41:35.513101 systemd[1]: Started kubelet.service.
Sep 6 00:41:35.585897 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:41:35.585897 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:41:35.585897 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:41:35.586512 kubelet[2415]: I0906 00:41:35.586004 2415 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:41:35.593431 kubelet[2415]: I0906 00:41:35.593386 2415 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:41:35.593431 kubelet[2415]: I0906 00:41:35.593420 2415 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:41:35.593782 kubelet[2415]: I0906 00:41:35.593759 2415 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:41:35.596030 kubelet[2415]: I0906 00:41:35.595311 2415 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:41:35.599220 kubelet[2415]: I0906 00:41:35.597748 2415 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:41:35.605367 kubelet[2415]: E0906 00:41:35.605312 2415 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:41:35.605367 kubelet[2415]: I0906 00:41:35.605361 2415 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:41:35.610424 kubelet[2415]: I0906 00:41:35.610397 2415 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:41:35.610730 kubelet[2415]: I0906 00:41:35.610714 2415 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:41:35.612648 kubelet[2415]: I0906 00:41:35.612596 2415 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:41:35.613260 kubelet[2415]: I0906 00:41:35.612773 2415 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-cde0707216","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:41:35.613473 kubelet[2415]: I0906 00:41:35.613460 2415 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:41:35.613548 kubelet[2415]: I0906 00:41:35.613539 2415 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:41:35.613655 kubelet[2415]: I0906 00:41:35.613646 2415 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:41:35.613870 kubelet[2415]: I0906 00:41:35.613858 2415 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:41:35.613948 kubelet[2415]: I0906 00:41:35.613939 2415 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:41:35.614051 kubelet[2415]: I0906 00:41:35.614043 2415 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:41:35.614121 kubelet[2415]: I0906 00:41:35.614112 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:41:35.627210 kubelet[2415]: I0906 00:41:35.627154 2415 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:41:35.627757 kubelet[2415]: I0906 00:41:35.627729 2415 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:41:35.628351 kubelet[2415]: I0906 00:41:35.628323 2415 server.go:1274] "Started kubelet" Sep 6 00:41:35.632266 kubelet[2415]: I0906 00:41:35.632214 2415 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:41:35.635211 kubelet[2415]: I0906 00:41:35.635183 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:41:35.635845 kubelet[2415]: I0906 00:41:35.635808 2415 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:41:35.637997 kubelet[2415]: I0906 00:41:35.637954 2415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:41:35.638323 kubelet[2415]: I0906 00:41:35.638305 2415 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:41:35.643626 kubelet[2415]: I0906 00:41:35.642727 2415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:41:35.646055 kubelet[2415]: I0906 00:41:35.646036 2415 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:41:35.647287 kubelet[2415]: I0906 00:41:35.647268 2415 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:41:35.648102 kubelet[2415]: I0906 00:41:35.648086 2415 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:41:35.651964 kubelet[2415]: I0906 00:41:35.651918 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:41:35.652944 kubelet[2415]: I0906 00:41:35.652917 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:41:35.653039 kubelet[2415]: I0906 00:41:35.652950 2415 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:41:35.653039 kubelet[2415]: I0906 00:41:35.652974 2415 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:41:35.653127 kubelet[2415]: E0906 00:41:35.653033 2415 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:41:35.661728 kubelet[2415]: I0906 00:41:35.661531 2415 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:41:35.661894 kubelet[2415]: I0906 00:41:35.661743 2415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:41:35.664792 kubelet[2415]: E0906 00:41:35.664757 2415 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:41:35.667090 kubelet[2415]: I0906 00:41:35.665240 2415 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:41:35.978379 kubelet[2415]: E0906 00:41:35.978219 2415 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:41:35.989806 kubelet[2415]: I0906 00:41:35.989765 2415 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:41:35.989806 kubelet[2415]: I0906 00:41:35.989791 2415 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:41:35.990091 kubelet[2415]: I0906 00:41:35.989842 2415 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:41:35.990091 kubelet[2415]: I0906 00:41:35.990068 2415 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:41:35.990183 kubelet[2415]: I0906 00:41:35.990085 2415 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:41:35.990183 kubelet[2415]: I0906 00:41:35.990115 2415 policy_none.go:49] "None policy: Start" Sep 6 00:41:35.991027 kubelet[2415]: I0906 00:41:35.991003 2415 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:41:35.991178 kubelet[2415]: I0906 00:41:35.991154 2415 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:41:35.991361 kubelet[2415]: I0906 00:41:35.991345 2415 state_mem.go:75] "Updated machine memory state" Sep 6 00:41:35.995949 kubelet[2415]: I0906 00:41:35.995926 2415 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:41:35.996341 kubelet[2415]: I0906 00:41:35.996320 2415 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:41:35.996484 kubelet[2415]: I0906 00:41:35.996443 2415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:41:35.997136 kubelet[2415]: I0906 00:41:35.997117 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:41:36.023342 sudo[2447]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:41:36.023660 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:41:36.105645 kubelet[2415]: I0906 00:41:36.105602 2415 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.120941 kubelet[2415]: I0906 00:41:36.120889 2415 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.121151 kubelet[2415]: I0906 00:41:36.121016 2415 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.202663 kubelet[2415]: W0906 00:41:36.202612 2415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:41:36.207709 kubelet[2415]: W0906 00:41:36.207667 2415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:41:36.207919 kubelet[2415]: W0906 00:41:36.207902 2415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:41:36.278654 kubelet[2415]: I0906 00:41:36.278598 2415 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.278906 kubelet[2415]: I0906 00:41:36.278672 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.278906 kubelet[2415]: I0906 00:41:36.278736 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.278906 kubelet[2415]: I0906 00:41:36.278764 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/605fa4188f5b105934cb1ef12ae4e8ad-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-cde0707216\" (UID: \"605fa4188f5b105934cb1ef12ae4e8ad\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.278906 kubelet[2415]: I0906 00:41:36.278840 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/605fa4188f5b105934cb1ef12ae4e8ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-cde0707216\" (UID: \"605fa4188f5b105934cb1ef12ae4e8ad\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.278906 kubelet[2415]: I0906 00:41:36.278865 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.279141 kubelet[2415]: I0906 00:41:36.278899 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/605fa4188f5b105934cb1ef12ae4e8ad-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-cde0707216\" (UID: \"605fa4188f5b105934cb1ef12ae4e8ad\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.279141 kubelet[2415]: I0906 00:41:36.278924 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/471171024478c7279be1af1e25fc8da0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-cde0707216\" (UID: \"471171024478c7279be1af1e25fc8da0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.279141 kubelet[2415]: I0906 00:41:36.278964 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a2de5d17a1338243a13eaf1ab8ee5ac5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-cde0707216\" (UID: \"a2de5d17a1338243a13eaf1ab8ee5ac5\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-cde0707216" Sep 6 00:41:36.628044 sudo[2447]: pam_unix(sudo:session): session closed for user root Sep 6 00:41:36.628562 kubelet[2415]: I0906 00:41:36.628530 2415 apiserver.go:52] "Watching apiserver" Sep 6 00:41:36.647686 kubelet[2415]: I0906 00:41:36.647637 2415 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:41:36.769915 kubelet[2415]: I0906 00:41:36.769795 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-cde0707216" podStartSLOduration=0.769767757 podStartE2EDuration="769.767757ms" podCreationTimestamp="2025-09-06 00:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:41:36.748159184 +0000 UTC m=+1.228795473" watchObservedRunningTime="2025-09-06 00:41:36.769767757 +0000 UTC m=+1.250403946" Sep 6 00:41:36.785328 kubelet[2415]: I0906 00:41:36.785241 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-cde0707216" podStartSLOduration=0.785214081 podStartE2EDuration="785.214081ms" podCreationTimestamp="2025-09-06 00:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:41:36.771059251 +0000 UTC m=+1.251695440" watchObservedRunningTime="2025-09-06 00:41:36.785214081 +0000 UTC m=+1.265850270" Sep 6 00:41:36.804800 kubelet[2415]: I0906 00:41:36.804721 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-cde0707216" podStartSLOduration=0.804633194 podStartE2EDuration="804.633194ms" podCreationTimestamp="2025-09-06 00:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:41:36.786527476 +0000 UTC m=+1.267163765" watchObservedRunningTime="2025-09-06 00:41:36.804633194 +0000 UTC m=+1.285269483" Sep 6 00:41:38.418498 sudo[1728]: pam_unix(sudo:session): session closed for user root Sep 6 00:41:38.523018 sshd[1725]: pam_unix(sshd:session): session closed for user core Sep 6 00:41:38.527103 systemd[1]: sshd@4-10.200.8.17:22-10.200.16.10:46592.service: Deactivated successfully. Sep 6 00:41:38.528167 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:41:38.528388 systemd[1]: session-7.scope: Consumed 4.464s CPU time. Sep 6 00:41:38.529419 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:41:38.530779 systemd-logind[1421]: Removed session 7. Sep 6 00:41:40.697890 kubelet[2415]: I0906 00:41:40.697844 2415 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:41:40.698538 env[1434]: time="2025-09-06T00:41:40.698436822Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:41:40.699678 kubelet[2415]: I0906 00:41:40.699643 2415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:41:41.751603 systemd[1]: Created slice kubepods-burstable-pode38beb11_f548_4cc3_86fb_1edec83f3295.slice. 
Sep 6 00:41:41.768323 systemd[1]: Created slice kubepods-besteffort-pod6e48e5da_8da4_4b0d_ba64_dfc91f2e1946.slice. Sep 6 00:41:41.817067 kubelet[2415]: I0906 00:41:41.817001 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-config-path\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817067 kubelet[2415]: I0906 00:41:41.817058 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-lib-modules\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817747 kubelet[2415]: I0906 00:41:41.817084 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e48e5da-8da4-4b0d-ba64-dfc91f2e1946-xtables-lock\") pod \"kube-proxy-snsss\" (UID: \"6e48e5da-8da4-4b0d-ba64-dfc91f2e1946\") " pod="kube-system/kube-proxy-snsss" Sep 6 00:41:41.817747 kubelet[2415]: I0906 00:41:41.817109 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-hubble-tls\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817747 kubelet[2415]: I0906 00:41:41.817132 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e48e5da-8da4-4b0d-ba64-dfc91f2e1946-kube-proxy\") pod \"kube-proxy-snsss\" (UID: \"6e48e5da-8da4-4b0d-ba64-dfc91f2e1946\") " pod="kube-system/kube-proxy-snsss" Sep 6 00:41:41.817747 kubelet[2415]: I0906 00:41:41.817153 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-net\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817747 kubelet[2415]: I0906 00:41:41.817176 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbgv\" (UniqueName: \"kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-kube-api-access-7jbgv\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817920 kubelet[2415]: I0906 00:41:41.817202 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvgzq\" (UniqueName: \"kubernetes.io/projected/6e48e5da-8da4-4b0d-ba64-dfc91f2e1946-kube-api-access-bvgzq\") pod \"kube-proxy-snsss\" (UID: \"6e48e5da-8da4-4b0d-ba64-dfc91f2e1946\") " pod="kube-system/kube-proxy-snsss" Sep 6 00:41:41.817920 kubelet[2415]: I0906 00:41:41.817228 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-run\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817920 kubelet[2415]: 
I0906 00:41:41.817251 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-cgroup\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817920 kubelet[2415]: I0906 00:41:41.817279 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-etc-cni-netd\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817920 kubelet[2415]: I0906 00:41:41.817308 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e38beb11-f548-4cc3-86fb-1edec83f3295-clustermesh-secrets\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.817920 kubelet[2415]: I0906 00:41:41.817333 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-hostproc\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.818072 kubelet[2415]: I0906 00:41:41.817356 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cni-path\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.818072 kubelet[2415]: I0906 00:41:41.817378 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e48e5da-8da4-4b0d-ba64-dfc91f2e1946-lib-modules\") pod \"kube-proxy-snsss\" (UID: \"6e48e5da-8da4-4b0d-ba64-dfc91f2e1946\") " pod="kube-system/kube-proxy-snsss" Sep 6 00:41:41.818072 kubelet[2415]: I0906 00:41:41.817418 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-bpf-maps\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.818072 kubelet[2415]: I0906 00:41:41.817442 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-xtables-lock\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.818072 kubelet[2415]: I0906 00:41:41.817473 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-kernel\") pod \"cilium-fnpsf\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " pod="kube-system/cilium-fnpsf" Sep 6 00:41:41.881528 systemd[1]: Created slice kubepods-besteffort-pod85cd2c7a_e29b_4d09_8a1c_b2581be481ea.slice. 
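The slice names above encode the kubelet's systemd cgroup layout: QoS class plus the pod UID with dashes replaced by underscores, since '-' is systemd's hierarchy separator inside slice names. The helper below reproduces the two names just created; the function itself is illustrative, not kubelet code.

    // slice_name.go — derive the systemd slice name the kubelet uses for a pod.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        // '-' is systemd's slice hierarchy separator, so UIDs are escaped with '_'.
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "e38beb11-f548-4cc3-86fb-1edec83f3295"))
        fmt.Println(podSlice("besteffort", "85cd2c7a-e29b-4d09-8a1c-b2581be481ea"))
    }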
Sep 6 00:41:41.918048 kubelet[2415]: I0906 00:41:41.917999 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-cilium-config-path\") pod \"cilium-operator-5d85765b45-vd7vm\" (UID: \"85cd2c7a-e29b-4d09-8a1c-b2581be481ea\") " pod="kube-system/cilium-operator-5d85765b45-vd7vm" Sep 6 00:41:41.919089 kubelet[2415]: I0906 00:41:41.919041 2415 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:41:41.919430 kubelet[2415]: I0906 00:41:41.919405 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krjd5\" (UniqueName: \"kubernetes.io/projected/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-kube-api-access-krjd5\") pod \"cilium-operator-5d85765b45-vd7vm\" (UID: \"85cd2c7a-e29b-4d09-8a1c-b2581be481ea\") " pod="kube-system/cilium-operator-5d85765b45-vd7vm" Sep 6 00:41:42.056468 env[1434]: time="2025-09-06T00:41:42.056314787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnpsf,Uid:e38beb11-f548-4cc3-86fb-1edec83f3295,Namespace:kube-system,Attempt:0,}" Sep 6 00:41:42.077843 env[1434]: time="2025-09-06T00:41:42.077772324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snsss,Uid:6e48e5da-8da4-4b0d-ba64-dfc91f2e1946,Namespace:kube-system,Attempt:0,}" Sep 6 00:41:42.112179 env[1434]: time="2025-09-06T00:41:42.106914438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:41:42.112179 env[1434]: time="2025-09-06T00:41:42.106959041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:41:42.112179 env[1434]: time="2025-09-06T00:41:42.106969642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:41:42.112179 env[1434]: time="2025-09-06T00:41:42.107135352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e pid=2496 runtime=io.containerd.runc.v2 Sep 6 00:41:42.132918 env[1434]: time="2025-09-06T00:41:42.131018839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:41:42.132918 env[1434]: time="2025-09-06T00:41:42.131103845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:41:42.132918 env[1434]: time="2025-09-06T00:41:42.131132546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:41:42.132918 env[1434]: time="2025-09-06T00:41:42.131283056Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4a109722118041571daf69eb4e8db584381bf3cb66820b0bb715658cd6a3c95 pid=2524 runtime=io.containerd.runc.v2 Sep 6 00:41:42.132400 systemd[1]: Started cri-containerd-aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e.scope. 
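Each sandbox start above follows the same path: the kubelet issues RunPodSandbox over the CRI unix socket, containerd spawns a runc v2 shim (the "starting signal loop" entries, one pid per sandbox), and systemd tracks the shim as a cri-containerd-<id>.scope unit. As a hedged sketch, the same CRI endpoint can be queried directly from Go; the socket path is containerd's conventional default, assumed rather than read from this log.

    // cri_version.go — illustrative: ask the CRI endpoint the kubelet uses
    // (containerd 1.6.16 here) for its version over the unix socket.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        client := runtimeapi.NewRuntimeServiceClient(conn)
        v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. "containerd 1.6.16"
    }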
Sep 6 00:41:42.157118 systemd[1]: Started cri-containerd-d4a109722118041571daf69eb4e8db584381bf3cb66820b0bb715658cd6a3c95.scope. Sep 6 00:41:42.187852 env[1434]: time="2025-09-06T00:41:42.187054529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vd7vm,Uid:85cd2c7a-e29b-4d09-8a1c-b2581be481ea,Namespace:kube-system,Attempt:0,}" Sep 6 00:41:42.203875 env[1434]: time="2025-09-06T00:41:42.203773870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnpsf,Uid:e38beb11-f548-4cc3-86fb-1edec83f3295,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\"" Sep 6 00:41:42.207984 env[1434]: time="2025-09-06T00:41:42.206660550Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:41:42.217262 env[1434]: time="2025-09-06T00:41:42.217202306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snsss,Uid:6e48e5da-8da4-4b0d-ba64-dfc91f2e1946,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a109722118041571daf69eb4e8db584381bf3cb66820b0bb715658cd6a3c95\"" Sep 6 00:41:42.223185 env[1434]: time="2025-09-06T00:41:42.223142776Z" level=info msg="CreateContainer within sandbox \"d4a109722118041571daf69eb4e8db584381bf3cb66820b0bb715658cd6a3c95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:41:42.242943 env[1434]: time="2025-09-06T00:41:42.242833402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:41:42.243227 env[1434]: time="2025-09-06T00:41:42.242900306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:41:42.243227 env[1434]: time="2025-09-06T00:41:42.242914907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:41:42.249042 env[1434]: time="2025-09-06T00:41:42.247959621Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d pid=2583 runtime=io.containerd.runc.v2 Sep 6 00:41:42.273267 systemd[1]: Started cri-containerd-3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d.scope. Sep 6 00:41:42.287562 env[1434]: time="2025-09-06T00:41:42.287490683Z" level=info msg="CreateContainer within sandbox \"d4a109722118041571daf69eb4e8db584381bf3cb66820b0bb715658cd6a3c95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7faf6e92f12ce3a4cc69413a98ce000d46bfd029d99b0f01192583b34f512e2f\"" Sep 6 00:41:42.290379 env[1434]: time="2025-09-06T00:41:42.288769663Z" level=info msg="StartContainer for \"7faf6e92f12ce3a4cc69413a98ce000d46bfd029d99b0f01192583b34f512e2f\"" Sep 6 00:41:42.315746 systemd[1]: Started cri-containerd-7faf6e92f12ce3a4cc69413a98ce000d46bfd029d99b0f01192583b34f512e2f.scope. 
Sep 6 00:41:42.355872 env[1434]: time="2025-09-06T00:41:42.355795836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vd7vm,Uid:85cd2c7a-e29b-4d09-8a1c-b2581be481ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d\"" Sep 6 00:41:42.383599 env[1434]: time="2025-09-06T00:41:42.383532664Z" level=info msg="StartContainer for \"7faf6e92f12ce3a4cc69413a98ce000d46bfd029d99b0f01192583b34f512e2f\" returns successfully" Sep 6 00:41:42.753196 kubelet[2415]: I0906 00:41:42.753122 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-snsss" podStartSLOduration=1.753086176 podStartE2EDuration="1.753086176s" podCreationTimestamp="2025-09-06 00:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:41:42.737855528 +0000 UTC m=+7.218491817" watchObservedRunningTime="2025-09-06 00:41:42.753086176 +0000 UTC m=+7.233722465" Sep 6 00:41:47.910235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406117855.mount: Deactivated successfully. Sep 6 00:41:50.737546 env[1434]: time="2025-09-06T00:41:50.737492319Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:50.742673 env[1434]: time="2025-09-06T00:41:50.742627981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:50.748440 env[1434]: time="2025-09-06T00:41:50.748394475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:50.749090 env[1434]: time="2025-09-06T00:41:50.749047208Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:41:50.750761 env[1434]: time="2025-09-06T00:41:50.750729694Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:41:50.753717 env[1434]: time="2025-09-06T00:41:50.752435981Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:41:50.797075 env[1434]: time="2025-09-06T00:41:50.797016457Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\"" Sep 6 00:41:50.799377 env[1434]: time="2025-09-06T00:41:50.799335575Z" level=info msg="StartContainer for \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\"" Sep 6 00:41:50.840102 systemd[1]: Started cri-containerd-73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d.scope. 
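Note that the cilium image was requested by tag plus digest (v1.12.5@sha256:06ce…) and PullImage resolves it to the bare content-addressed reference sha256:3e35… — when both are present, the digest pins the image and the tag is informational. Splitting such a reference is plain string work; the parsing below is illustrative, not containerd's reference library.

    // image_ref.go — split a "repo:tag@sha256:..." reference like the one above.
    package main

    import (
        "fmt"
        "strings"
    )

    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // A ':' after the last '/' separates the tag from the repository.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
        fmt.Println(repo)   // quay.io/cilium/cilium
        fmt.Println(tag)    // v1.12.5
        fmt.Println(digest) // sha256:06ce...
    }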
Sep 6 00:41:50.874855 env[1434]: time="2025-09-06T00:41:50.871858477Z" level=info msg="StartContainer for \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\" returns successfully" Sep 6 00:41:50.887592 systemd[1]: cri-containerd-73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d.scope: Deactivated successfully. Sep 6 00:41:51.786461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d-rootfs.mount: Deactivated successfully. Sep 6 00:41:54.592401 env[1434]: time="2025-09-06T00:41:54.592322295Z" level=info msg="shim disconnected" id=73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d Sep 6 00:41:54.592401 env[1434]: time="2025-09-06T00:41:54.592387198Z" level=warning msg="cleaning up after shim disconnected" id=73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d namespace=k8s.io Sep 6 00:41:54.592401 env[1434]: time="2025-09-06T00:41:54.592400499Z" level=info msg="cleaning up dead shim" Sep 6 00:41:54.602333 env[1434]: time="2025-09-06T00:41:54.602276557Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:41:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2831 runtime=io.containerd.runc.v2\n" Sep 6 00:41:54.751603 env[1434]: time="2025-09-06T00:41:54.751541484Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:41:54.861835 env[1434]: time="2025-09-06T00:41:54.861661594Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\"" Sep 6 00:41:54.863535 env[1434]: time="2025-09-06T00:41:54.862605038Z" level=info msg="StartContainer for \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\"" Sep 6 00:41:54.896474 systemd[1]: Started cri-containerd-c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d.scope. Sep 6 00:41:54.935957 env[1434]: time="2025-09-06T00:41:54.935874338Z" level=info msg="StartContainer for \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\" returns successfully" Sep 6 00:41:54.945835 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:41:54.946151 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:41:54.946563 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:41:54.949586 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:41:54.957983 systemd[1]: cri-containerd-c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d.scope: Deactivated successfully. Sep 6 00:41:54.969441 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:41:54.997649 env[1434]: time="2025-09-06T00:41:54.997582902Z" level=info msg="shim disconnected" id=c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d Sep 6 00:41:54.997649 env[1434]: time="2025-09-06T00:41:54.997639005Z" level=warning msg="cleaning up after shim disconnected" id=c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d namespace=k8s.io Sep 6 00:41:54.997649 env[1434]: time="2025-09-06T00:41:54.997652205Z" level=info msg="cleaning up dead shim" Sep 6 00:41:55.007324 env[1434]: time="2025-09-06T00:41:55.007259045Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:41:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2897 runtime=io.containerd.runc.v2\n" Sep 6 00:41:55.767874 env[1434]: time="2025-09-06T00:41:55.767794326Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:41:55.792058 systemd[1]: run-containerd-runc-k8s.io-c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d-runc.B2zHkP.mount: Deactivated successfully. Sep 6 00:41:55.792222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d-rootfs.mount: Deactivated successfully. Sep 6 00:41:55.838039 env[1434]: time="2025-09-06T00:41:55.837972908Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\"" Sep 6 00:41:55.841499 env[1434]: time="2025-09-06T00:41:55.841454865Z" level=info msg="StartContainer for \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\"" Sep 6 00:41:55.885978 systemd[1]: Started cri-containerd-b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad.scope. Sep 6 00:41:55.939095 systemd[1]: cri-containerd-b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad.scope: Deactivated successfully. 
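mount-bpf-fs, the init step whose container just ran, makes sure a BPF filesystem is mounted at /sys/fs/bpf before the agent starts. Assuming it performs a plain bpffs mount (an assumption; this log only shows the container lifecycle), the operation reduces to a single mount syscall:

    // mount_bpffs.go — sketch of what a mount-bpf-fs step boils down to:
    // mount the BPF filesystem at /sys/fs/bpf (requires CAP_SYS_ADMIN).
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // The real init container would first check whether bpffs is already
        // mounted and skip the call; that idempotence check is omitted here.
        if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            fmt.Println("mount failed:", err)
            return
        }
        fmt.Println("bpffs mounted at /sys/fs/bpf")
    }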
Sep 6 00:41:55.942369 env[1434]: time="2025-09-06T00:41:55.941887019Z" level=info msg="StartContainer for \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\" returns successfully" Sep 6 00:41:56.331527 env[1434]: time="2025-09-06T00:41:56.331453139Z" level=info msg="shim disconnected" id=b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad Sep 6 00:41:56.331527 env[1434]: time="2025-09-06T00:41:56.331519742Z" level=warning msg="cleaning up after shim disconnected" id=b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad namespace=k8s.io Sep 6 00:41:56.331527 env[1434]: time="2025-09-06T00:41:56.331533442Z" level=info msg="cleaning up dead shim" Sep 6 00:41:56.359810 env[1434]: time="2025-09-06T00:41:56.359736892Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:41:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2955 runtime=io.containerd.runc.v2\n" Sep 6 00:41:56.502001 env[1434]: time="2025-09-06T00:41:56.501933591Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:56.510277 env[1434]: time="2025-09-06T00:41:56.510223658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:56.514267 env[1434]: time="2025-09-06T00:41:56.514226636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:41:56.514801 env[1434]: time="2025-09-06T00:41:56.514755359Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:41:56.518060 env[1434]: time="2025-09-06T00:41:56.518024704Z" level=info msg="CreateContainer within sandbox \"3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:41:56.552955 env[1434]: time="2025-09-06T00:41:56.552892249Z" level=info msg="CreateContainer within sandbox \"3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\"" Sep 6 00:41:56.555639 env[1434]: time="2025-09-06T00:41:56.553460174Z" level=info msg="StartContainer for \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\"" Sep 6 00:41:56.575082 systemd[1]: Started cri-containerd-4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3.scope. 
Sep 6 00:41:56.631664 env[1434]: time="2025-09-06T00:41:56.631506231Z" level=info msg="StartContainer for \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\" returns successfully" Sep 6 00:41:56.763985 env[1434]: time="2025-09-06T00:41:56.763921798Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:41:56.793414 systemd[1]: run-containerd-runc-k8s.io-b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad-runc.qAGyOX.mount: Deactivated successfully. Sep 6 00:41:56.793919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad-rootfs.mount: Deactivated successfully. Sep 6 00:41:56.804291 env[1434]: time="2025-09-06T00:41:56.804217683Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\"" Sep 6 00:41:56.806112 env[1434]: time="2025-09-06T00:41:56.806077165Z" level=info msg="StartContainer for \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\"" Sep 6 00:41:56.840756 systemd[1]: Started cri-containerd-da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59.scope. Sep 6 00:41:56.853657 systemd[1]: run-containerd-runc-k8s.io-da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59-runc.mBH2rm.mount: Deactivated successfully. Sep 6 00:41:56.900788 env[1434]: time="2025-09-06T00:41:56.900645155Z" level=info msg="StartContainer for \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\" returns successfully" Sep 6 00:41:56.901176 systemd[1]: cri-containerd-da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59.scope: Deactivated successfully. Sep 6 00:41:56.932441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59-rootfs.mount: Deactivated successfully. 
Sep 6 00:41:56.983539 env[1434]: time="2025-09-06T00:41:56.983463724Z" level=info msg="shim disconnected" id=da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59 Sep 6 00:41:56.983539 env[1434]: time="2025-09-06T00:41:56.983534727Z" level=warning msg="cleaning up after shim disconnected" id=da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59 namespace=k8s.io Sep 6 00:41:56.983539 env[1434]: time="2025-09-06T00:41:56.983547728Z" level=info msg="cleaning up dead shim" Sep 6 00:41:56.998298 env[1434]: time="2025-09-06T00:41:56.998239378Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:41:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3049 runtime=io.containerd.runc.v2\n" Sep 6 00:41:57.769868 env[1434]: time="2025-09-06T00:41:57.769790889Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:41:57.793948 kubelet[2415]: I0906 00:41:57.793869 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vd7vm" podStartSLOduration=2.635793486 podStartE2EDuration="16.79384183s" podCreationTimestamp="2025-09-06 00:41:41 +0000 UTC" firstStartedPulling="2025-09-06 00:41:42.35794147 +0000 UTC m=+6.838577659" lastFinishedPulling="2025-09-06 00:41:56.515989814 +0000 UTC m=+20.996626003" observedRunningTime="2025-09-06 00:41:57.07439788 +0000 UTC m=+21.555034069" watchObservedRunningTime="2025-09-06 00:41:57.79384183 +0000 UTC m=+22.274478019" Sep 6 00:41:57.811988 env[1434]: time="2025-09-06T00:41:57.811929213Z" level=info msg="CreateContainer within sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\"" Sep 6 00:41:57.812929 env[1434]: time="2025-09-06T00:41:57.812889855Z" level=info msg="StartContainer for \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\"" Sep 6 00:41:57.843762 systemd[1]: Started cri-containerd-95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8.scope. Sep 6 00:41:57.846515 systemd[1]: run-containerd-runc-k8s.io-95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8-runc.eoM5G3.mount: Deactivated successfully. Sep 6 00:41:57.898426 env[1434]: time="2025-09-06T00:41:57.898362156Z" level=info msg="StartContainer for \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\" returns successfully" Sep 6 00:41:58.102787 kubelet[2415]: I0906 00:41:58.102005 2415 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:41:58.177170 systemd[1]: Created slice kubepods-burstable-podc582ddc0_a64b_4829_9a54_07fcc9eabc2c.slice. Sep 6 00:41:58.224577 systemd[1]: Created slice kubepods-burstable-pod7cccc5b1_b2b9_48f6_92f6_b752edfb7033.slice. 
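The cilium-operator startup line above decodes neatly: podStartSLOduration is the end-to-end figure minus the image-pull window, i.e. 16.79384183s minus the ~14.158s between firstStartedPulling (00:41:42.357…) and lastFinishedPulling (00:41:56.515…) leaves 2.635793486s. The arithmetic can be checked directly:

    // slo_check.go — reproduce the cilium-operator podStartSLOduration above:
    // SLO duration = end-to-end startup minus the image pull window.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Parse tolerates the extra fractional seconds even though the
        // layout omits them.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-06 00:41:41 +0000 UTC")                 // podCreationTimestamp
        firstPull := mustParse("2025-09-06 00:41:42.35794147 +0000 UTC")      // firstStartedPulling
        lastPull := mustParse("2025-09-06 00:41:56.515989814 +0000 UTC")      // lastFinishedPulling
        running := mustParse("2025-09-06 00:41:57.79384183 +0000 UTC")        // watchObservedRunningTime

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println("end-to-end:", e2e) // 16.79384183s
        fmt.Println("SLO:", slo)        // 2.635793486s
    }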
Sep 6 00:41:58.242449 kubelet[2415]: I0906 00:41:58.242397 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c582ddc0-a64b-4829-9a54-07fcc9eabc2c-config-volume\") pod \"coredns-7c65d6cfc9-zldps\" (UID: \"c582ddc0-a64b-4829-9a54-07fcc9eabc2c\") " pod="kube-system/coredns-7c65d6cfc9-zldps" Sep 6 00:41:58.242449 kubelet[2415]: I0906 00:41:58.242459 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtf22\" (UniqueName: \"kubernetes.io/projected/c582ddc0-a64b-4829-9a54-07fcc9eabc2c-kube-api-access-xtf22\") pod \"coredns-7c65d6cfc9-zldps\" (UID: \"c582ddc0-a64b-4829-9a54-07fcc9eabc2c\") " pod="kube-system/coredns-7c65d6cfc9-zldps" Sep 6 00:41:58.343082 kubelet[2415]: I0906 00:41:58.343021 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cccc5b1-b2b9-48f6-92f6-b752edfb7033-config-volume\") pod \"coredns-7c65d6cfc9-g5xbb\" (UID: \"7cccc5b1-b2b9-48f6-92f6-b752edfb7033\") " pod="kube-system/coredns-7c65d6cfc9-g5xbb" Sep 6 00:41:58.343082 kubelet[2415]: I0906 00:41:58.343088 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7fvt\" (UniqueName: \"kubernetes.io/projected/7cccc5b1-b2b9-48f6-92f6-b752edfb7033-kube-api-access-d7fvt\") pod \"coredns-7c65d6cfc9-g5xbb\" (UID: \"7cccc5b1-b2b9-48f6-92f6-b752edfb7033\") " pod="kube-system/coredns-7c65d6cfc9-g5xbb" Sep 6 00:41:58.484851 env[1434]: time="2025-09-06T00:41:58.484167451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zldps,Uid:c582ddc0-a64b-4829-9a54-07fcc9eabc2c,Namespace:kube-system,Attempt:0,}" Sep 6 00:41:58.535882 env[1434]: time="2025-09-06T00:41:58.535796736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g5xbb,Uid:7cccc5b1-b2b9-48f6-92f6-b752edfb7033,Namespace:kube-system,Attempt:0,}" Sep 6 00:41:58.849481 kubelet[2415]: I0906 00:41:58.849399 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fnpsf" podStartSLOduration=9.304832932 podStartE2EDuration="17.849370508s" podCreationTimestamp="2025-09-06 00:41:41 +0000 UTC" firstStartedPulling="2025-09-06 00:41:42.206006209 +0000 UTC m=+6.686642398" lastFinishedPulling="2025-09-06 00:41:50.750543685 +0000 UTC m=+15.231179974" observedRunningTime="2025-09-06 00:41:58.813285181 +0000 UTC m=+23.293921470" watchObservedRunningTime="2025-09-06 00:41:58.849370508 +0000 UTC m=+23.330006697" Sep 6 00:42:00.583373 systemd-networkd[1585]: cilium_host: Link UP Sep 6 00:42:00.587990 systemd-networkd[1585]: cilium_net: Link UP Sep 6 00:42:00.600028 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:42:00.609530 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:42:00.608871 systemd-networkd[1585]: cilium_net: Gained carrier Sep 6 00:42:00.609241 systemd-networkd[1585]: cilium_host: Gained carrier Sep 6 00:42:00.741997 systemd-networkd[1585]: cilium_host: Gained IPv6LL Sep 6 00:42:00.907397 systemd-networkd[1585]: cilium_vxlan: Link UP Sep 6 00:42:00.907411 systemd-networkd[1585]: cilium_vxlan: Gained carrier Sep 6 00:42:01.203912 kernel: NET: Registered PF_ALG protocol family Sep 6 00:42:01.318069 systemd-networkd[1585]: cilium_net: Gained IPv6LL Sep 6 00:42:02.087558 systemd-networkd[1585]: 
lxc_health: Link UP Sep 6 00:42:02.128857 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:42:02.129235 systemd-networkd[1585]: lxc_health: Gained carrier Sep 6 00:42:02.406012 systemd-networkd[1585]: cilium_vxlan: Gained IPv6LL Sep 6 00:42:02.555252 systemd-networkd[1585]: lxc8c4411433f57: Link UP Sep 6 00:42:02.564857 kernel: eth0: renamed from tmpaa9ec Sep 6 00:42:02.578401 systemd-networkd[1585]: lxc8c4411433f57: Gained carrier Sep 6 00:42:02.578841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8c4411433f57: link becomes ready Sep 6 00:42:02.615624 systemd-networkd[1585]: lxccd3ed06b1cd7: Link UP Sep 6 00:42:02.624857 kernel: eth0: renamed from tmp41718 Sep 6 00:42:02.641571 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccd3ed06b1cd7: link becomes ready Sep 6 00:42:02.638584 systemd-networkd[1585]: lxccd3ed06b1cd7: Gained carrier Sep 6 00:42:04.006106 systemd-networkd[1585]: lxc_health: Gained IPv6LL Sep 6 00:42:04.262298 systemd-networkd[1585]: lxccd3ed06b1cd7: Gained IPv6LL Sep 6 00:42:04.582169 systemd-networkd[1585]: lxc8c4411433f57: Gained IPv6LL Sep 6 00:42:06.516324 env[1434]: time="2025-09-06T00:42:06.516228655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:42:06.516950 env[1434]: time="2025-09-06T00:42:06.516913779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:42:06.517107 env[1434]: time="2025-09-06T00:42:06.517080585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:42:06.518993 env[1434]: time="2025-09-06T00:42:06.518945552Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/417186726e99d75ad2ceb7058389136bf753e056acccc2d344f030acba2adf76 pid=3602 runtime=io.containerd.runc.v2 Sep 6 00:42:06.552694 systemd[1]: Started cri-containerd-417186726e99d75ad2ceb7058389136bf753e056acccc2d344f030acba2adf76.scope. Sep 6 00:42:06.559275 systemd[1]: run-containerd-runc-k8s.io-417186726e99d75ad2ceb7058389136bf753e056acccc2d344f030acba2adf76-runc.ohmmyv.mount: Deactivated successfully. Sep 6 00:42:06.609019 env[1434]: time="2025-09-06T00:42:06.608928852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:42:06.609236 env[1434]: time="2025-09-06T00:42:06.609027056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:42:06.609236 env[1434]: time="2025-09-06T00:42:06.609057357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:42:06.609236 env[1434]: time="2025-09-06T00:42:06.609210362Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa9ecb1942883242530e115b39464da9db01864601d2360fc102cbe4ee2fc198 pid=3636 runtime=io.containerd.runc.v2 Sep 6 00:42:06.650885 systemd[1]: Started cri-containerd-aa9ecb1942883242530e115b39464da9db01864601d2360fc102cbe4ee2fc198.scope. 
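systemd-networkd reports each Cilium datapath device coming up: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, and one lxc* veth per pod, each gaining carrier and an IPv6 link-local address. From userspace, the same readiness is visible through interface flags; a trivial check that matches the device names seen here by prefix:

    // links_up.go — list interfaces and whether they are up: the userspace
    // view of the "link becomes ready" transitions above.
    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            // Cilium's datapath devices carry recognizable name prefixes.
            relevant := strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc")
            fmt.Printf("%-18s up=%-5v cilium=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0, relevant)
        }
    }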
Sep 6 00:42:06.698621 env[1434]: time="2025-09-06T00:42:06.698546140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g5xbb,Uid:7cccc5b1-b2b9-48f6-92f6-b752edfb7033,Namespace:kube-system,Attempt:0,} returns sandbox id \"417186726e99d75ad2ceb7058389136bf753e056acccc2d344f030acba2adf76\"" Sep 6 00:42:06.704472 env[1434]: time="2025-09-06T00:42:06.704310045Z" level=info msg="CreateContainer within sandbox \"417186726e99d75ad2ceb7058389136bf753e056acccc2d344f030acba2adf76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:42:06.744999 env[1434]: time="2025-09-06T00:42:06.744907589Z" level=info msg="CreateContainer within sandbox \"417186726e99d75ad2ceb7058389136bf753e056acccc2d344f030acba2adf76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e557426f40d5519c1b32244afa9fb1599f9865a5c98e76dc772902bd2a396af\"" Sep 6 00:42:06.748367 env[1434]: time="2025-09-06T00:42:06.748329310Z" level=info msg="StartContainer for \"7e557426f40d5519c1b32244afa9fb1599f9865a5c98e76dc772902bd2a396af\"" Sep 6 00:42:06.748784 env[1434]: time="2025-09-06T00:42:06.748746825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zldps,Uid:c582ddc0-a64b-4829-9a54-07fcc9eabc2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa9ecb1942883242530e115b39464da9db01864601d2360fc102cbe4ee2fc198\"" Sep 6 00:42:06.752656 env[1434]: time="2025-09-06T00:42:06.752617563Z" level=info msg="CreateContainer within sandbox \"aa9ecb1942883242530e115b39464da9db01864601d2360fc102cbe4ee2fc198\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:42:06.789080 systemd[1]: Started cri-containerd-7e557426f40d5519c1b32244afa9fb1599f9865a5c98e76dc772902bd2a396af.scope. Sep 6 00:42:06.793610 env[1434]: time="2025-09-06T00:42:06.792249872Z" level=info msg="CreateContainer within sandbox \"aa9ecb1942883242530e115b39464da9db01864601d2360fc102cbe4ee2fc198\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fbefddd037a0552ab0f27c3c5536b8ab9e4dd0b9ff188013825b323f2a7e081\"" Sep 6 00:42:06.795306 env[1434]: time="2025-09-06T00:42:06.795270580Z" level=info msg="StartContainer for \"2fbefddd037a0552ab0f27c3c5536b8ab9e4dd0b9ff188013825b323f2a7e081\"" Sep 6 00:42:06.832351 systemd[1]: Started cri-containerd-2fbefddd037a0552ab0f27c3c5536b8ab9e4dd0b9ff188013825b323f2a7e081.scope. 
Sep 6 00:42:06.886178 env[1434]: time="2025-09-06T00:42:06.886099710Z" level=info msg="StartContainer for \"7e557426f40d5519c1b32244afa9fb1599f9865a5c98e76dc772902bd2a396af\" returns successfully" Sep 6 00:42:06.890978 env[1434]: time="2025-09-06T00:42:06.890902281Z" level=info msg="StartContainer for \"2fbefddd037a0552ab0f27c3c5536b8ab9e4dd0b9ff188013825b323f2a7e081\" returns successfully" Sep 6 00:42:07.829365 kubelet[2415]: I0906 00:42:07.829280 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g5xbb" podStartSLOduration=26.829256452 podStartE2EDuration="26.829256452s" podCreationTimestamp="2025-09-06 00:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:42:07.828613729 +0000 UTC m=+32.309250018" watchObservedRunningTime="2025-09-06 00:42:07.829256452 +0000 UTC m=+32.309892641" Sep 6 00:42:07.866725 kubelet[2415]: I0906 00:42:07.866649 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zldps" podStartSLOduration=26.866617853 podStartE2EDuration="26.866617853s" podCreationTimestamp="2025-09-06 00:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:42:07.866108135 +0000 UTC m=+32.346744424" watchObservedRunningTime="2025-09-06 00:42:07.866617853 +0000 UTC m=+32.347254042" Sep 6 00:42:58.593868 update_engine[1422]: I0906 00:42:58.593780 1422 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 6 00:42:58.593868 update_engine[1422]: I0906 00:42:58.593868 1422 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 6 00:42:58.594623 update_engine[1422]: I0906 00:42:58.594067 1422 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 6 00:42:58.594774 update_engine[1422]: I0906 00:42:58.594723 1422 omaha_request_params.cc:62] Current group set to lts Sep 6 00:42:58.595213 update_engine[1422]: I0906 00:42:58.594964 1422 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 6 00:42:58.595213 update_engine[1422]: I0906 00:42:58.594982 1422 update_attempter.cc:643] Scheduling an action processor start. 
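The two "Observed pod startup duration" records above come from kubelet's pod_startup_latency_tracker. With no image pull recorded (both pull timestamps are the zero value "0001-01-01"), the SLO duration reduces to observedRunningTime minus podCreationTimestamp: 00:42:07.829 minus 00:41:41 is about 26.83 s, which is why podStartSLOduration equals podStartE2EDuration here. A sketch of that arithmetic, hedged as an approximation of the tracker's logic:

package slosketch

import "time"

// podStartSLO approximates the computation behind podStartSLOduration:
// time from pod creation to observed running, minus image-pull time when
// a pull was actually recorded. With zero-valued pull timestamps, as in
// the log above, nothing is subtracted and the result is 26.829256452s.
func podStartSLO(created, running, pullStart, pullEnd time.Time) time.Duration {
	d := running.Sub(created)
	if !pullStart.IsZero() && !pullEnd.IsZero() {
		d -= pullEnd.Sub(pullStart) // exclude time spent pulling the image
	}
	return d
}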
Sep 6 00:42:58.595213 update_engine[1422]: I0906 00:42:58.595005 1422 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 6 00:42:58.595213 update_engine[1422]: I0906 00:42:58.595046 1422 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 6 00:42:58.595213 update_engine[1422]: I0906 00:42:58.595120 1422 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 6 00:42:58.595213 update_engine[1422]: I0906 00:42:58.595126 1422 omaha_request_action.cc:271] Request: Sep 6 00:42:58.595213 update_engine[1422]: [Omaha request XML body not preserved in this capture] Sep 6 00:42:58.595213 update_engine[1422]: I0906 00:42:58.595134 1422 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:42:58.595713 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 6 00:42:58.642583 update_engine[1422]: I0906 00:42:58.642520 1422 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:42:58.643038 update_engine[1422]: I0906 00:42:58.642928 1422 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:42:58.673624 update_engine[1422]: E0906 00:42:58.673555 1422 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:42:58.673912 update_engine[1422]: I0906 00:42:58.673743 1422 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 6 00:43:08.549600 update_engine[1422]: I0906 00:43:08.548990 1422 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:43:08.549600 update_engine[1422]: I0906 00:43:08.549317 1422 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:43:08.549600 update_engine[1422]: I0906 00:43:08.549548 1422 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:43:08.578113 update_engine[1422]: E0906 00:43:08.578027 1422 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:43:08.578376 update_engine[1422]: I0906 00:43:08.578222 1422 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 6 00:43:18.555002 update_engine[1422]: I0906 00:43:18.554924 1422 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:43:18.555549 update_engine[1422]: I0906 00:43:18.555323 1422 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:43:18.555699 update_engine[1422]: I0906 00:43:18.555645 1422 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 6 00:43:18.626536 update_engine[1422]: E0906 00:43:18.626466 1422 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:43:18.626783 update_engine[1422]: I0906 00:43:18.626654 1422 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 6 00:43:28.554457 update_engine[1422]: I0906 00:43:28.554329 1422 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:43:28.555982 update_engine[1422]: I0906 00:43:28.555392 1422 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:43:28.556269 update_engine[1422]: I0906 00:43:28.556188 1422 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 00:43:28.572492 update_engine[1422]: E0906 00:43:28.572444 1422 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:43:28.572645 update_engine[1422]: I0906 00:43:28.572584 1422 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 6 00:43:28.572645 update_engine[1422]: I0906 00:43:28.572594 1422 omaha_request_action.cc:621] Omaha request response: Sep 6 00:43:28.572737 update_engine[1422]: E0906 00:43:28.572700 1422 omaha_request_action.cc:640] Omaha request network transfer failed. Sep 6 00:43:28.572737 update_engine[1422]: I0906 00:43:28.572719 1422 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 6 00:43:28.572737 update_engine[1422]: I0906 00:43:28.572724 1422 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:43:28.572737 update_engine[1422]: I0906 00:43:28.572729 1422 update_attempter.cc:306] Processing Done. Sep 6 00:43:28.572908 update_engine[1422]: E0906 00:43:28.572748 1422 update_attempter.cc:619] Update failed. Sep 6 00:43:28.572908 update_engine[1422]: I0906 00:43:28.572754 1422 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 6 00:43:28.572908 update_engine[1422]: I0906 00:43:28.572759 1422 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 6 00:43:28.572908 update_engine[1422]: I0906 00:43:28.572765 1422 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 6 00:43:28.572908 update_engine[1422]: I0906 00:43:28.572893 1422 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 6 00:43:28.573104 update_engine[1422]: I0906 00:43:28.572922 1422 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 6 00:43:28.573104 update_engine[1422]: I0906 00:43:28.572928 1422 omaha_request_action.cc:271] Request: Sep 6 00:43:28.573104 update_engine[1422]: [error-event request XML body not preserved in this capture] Sep 6 00:43:28.573104 update_engine[1422]: I0906 00:43:28.572934 1422 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 6 00:43:28.573423 update_engine[1422]: I0906 00:43:28.573122 1422 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 6 00:43:28.573423 update_engine[1422]: I0906 00:43:28.573302 1422 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
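The update_engine sequence above is a bounded retry loop followed by an explicit give-up path: each attempt arms a short libcurl timeout, the placeholder server URL "disabled" fails DNS resolution ("Could not resolve host"), retries 1 through 3 run roughly ten seconds apart, and the action then aborts, converting generic error 2000 into kActionCodeOmahaErrorInHTTPResponse (37) and posting an error event. A rough Go analogue of the fetch-and-retry part only, as a sketch with illustrative names, not the C++ implementation in libcurl_http_fetcher.cc:

package omahasketch

import (
	"context"
	"net/http"
	"time"
)

// fetchWithRetry mimics the pattern in the log: attempt the transfer,
// treat any transport error as "No HTTP response, retry N", wait, and
// give up after maxRetries so the caller can report the Omaha error.
func fetchWithRetry(ctx context.Context, url string, maxRetries int) (*http.Response, error) {
	client := &http.Client{Timeout: 30 * time.Second}
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req)
		if err == nil {
			return resp, nil
		}
		lastErr = err // e.g. DNS failure for the placeholder host "disabled"
		time.Sleep(10 * time.Second) // the ~10 s spacing between attempts above
	}
	return nil, lastErr // caller converts this to its Omaha error code
}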
Sep 6 00:43:28.573717 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 6 00:43:28.595899 update_engine[1422]: E0906 00:43:28.595863 1422 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 6 00:43:28.596074 update_engine[1422]: I0906 00:43:28.595981 1422 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 6 00:43:28.596074 update_engine[1422]: I0906 00:43:28.595992 1422 omaha_request_action.cc:621] Omaha request response: Sep 6 00:43:28.596074 update_engine[1422]: I0906 00:43:28.596000 1422 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:43:28.596074 update_engine[1422]: I0906 00:43:28.596004 1422 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 6 00:43:28.596074 update_engine[1422]: I0906 00:43:28.596009 1422 update_attempter.cc:306] Processing Done. Sep 6 00:43:28.596074 update_engine[1422]: I0906 00:43:28.596015 1422 update_attempter.cc:310] Error event sent. Sep 6 00:43:28.596074 update_engine[1422]: I0906 00:43:28.596025 1422 update_check_scheduler.cc:74] Next update check in 48m20s Sep 6 00:43:28.596479 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 6 00:43:41.710946 systemd[1]: Started sshd@5-10.200.8.17:22-10.200.16.10:53002.service. Sep 6 00:43:42.344618 sshd[3780]: Accepted publickey for core from 10.200.16.10 port 53002 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:43:42.346783 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:43:42.354490 systemd[1]: Started session-8.scope. Sep 6 00:43:42.356473 systemd-logind[1421]: New session 8 of user core. Sep 6 00:43:42.866310 sshd[3780]: pam_unix(sshd:session): session closed for user core Sep 6 00:43:42.869862 systemd[1]: sshd@5-10.200.8.17:22-10.200.16.10:53002.service: Deactivated successfully. Sep 6 00:43:42.870991 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:43:42.871720 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:43:42.872562 systemd-logind[1421]: Removed session 8. Sep 6 00:43:47.980564 systemd[1]: Started sshd@6-10.200.8.17:22-10.200.16.10:53008.service. Sep 6 00:43:48.613691 sshd[3795]: Accepted publickey for core from 10.200.16.10 port 53008 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:43:48.615546 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:43:48.621589 systemd-logind[1421]: New session 9 of user core. Sep 6 00:43:48.622515 systemd[1]: Started session-9.scope. Sep 6 00:43:49.130308 sshd[3795]: pam_unix(sshd:session): session closed for user core Sep 6 00:43:49.134149 systemd[1]: sshd@6-10.200.8.17:22-10.200.16.10:53008.service: Deactivated successfully. Sep 6 00:43:49.135468 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:43:49.136343 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:43:49.137320 systemd-logind[1421]: Removed session 9. Sep 6 00:43:54.238348 systemd[1]: Started sshd@7-10.200.8.17:22-10.200.16.10:52782.service. 
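Each inbound connection above gets its own transient unit named after the address pair (sshd@5-10.200.8.17:22-10.200.16.10:53002.service), and pam_unix plus systemd-logind then place the login in a session-N.scope. The "RSA SHA256:zqYQ..." token is the OpenSSH-style base64 SHA-256 fingerprint of the client's public key; the same string can be computed in Go, assuming golang.org/x/crypto/ssh (function name illustrative):

package fpsketch

import "golang.org/x/crypto/ssh"

// fingerprint parses an authorized_keys-style line and returns the same
// "SHA256:..." string sshd logs on "Accepted publickey".
func fingerprint(authorizedKey []byte) (string, error) {
	pub, _, _, _, err := ssh.ParseAuthorizedKey(authorizedKey)
	if err != nil {
		return "", err
	}
	return ssh.FingerprintSHA256(pub), nil // e.g. "SHA256:zqYQlX7qSkE+..."
}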
Sep 6 00:43:54.874358 sshd[3808]: Accepted publickey for core from 10.200.16.10 port 52782 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:43:54.876628 sshd[3808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:43:54.882483 systemd[1]: Started session-10.scope. Sep 6 00:43:54.883007 systemd-logind[1421]: New session 10 of user core. Sep 6 00:43:55.392308 sshd[3808]: pam_unix(sshd:session): session closed for user core Sep 6 00:43:55.395716 systemd[1]: sshd@7-10.200.8.17:22-10.200.16.10:52782.service: Deactivated successfully. Sep 6 00:43:55.396890 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:43:55.397607 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:43:55.398498 systemd-logind[1421]: Removed session 10. Sep 6 00:44:00.500244 systemd[1]: Started sshd@8-10.200.8.17:22-10.200.16.10:51496.service. Sep 6 00:44:01.133425 sshd[3821]: Accepted publickey for core from 10.200.16.10 port 51496 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:01.135435 sshd[3821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:01.141047 systemd-logind[1421]: New session 11 of user core. Sep 6 00:44:01.141619 systemd[1]: Started session-11.scope. Sep 6 00:44:01.653756 sshd[3821]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:01.660101 systemd[1]: sshd@8-10.200.8.17:22-10.200.16.10:51496.service: Deactivated successfully. Sep 6 00:44:01.661190 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:44:01.661747 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:44:01.662764 systemd-logind[1421]: Removed session 11. Sep 6 00:44:01.763115 systemd[1]: Started sshd@9-10.200.8.17:22-10.200.16.10:51498.service. Sep 6 00:44:02.397577 sshd[3833]: Accepted publickey for core from 10.200.16.10 port 51498 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:02.399519 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:02.406584 systemd-logind[1421]: New session 12 of user core. Sep 6 00:44:02.407765 systemd[1]: Started session-12.scope. Sep 6 00:44:02.955124 sshd[3833]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:02.958745 systemd[1]: sshd@9-10.200.8.17:22-10.200.16.10:51498.service: Deactivated successfully. Sep 6 00:44:02.959878 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:44:02.960623 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:44:02.961501 systemd-logind[1421]: Removed session 12. Sep 6 00:44:03.063807 systemd[1]: Started sshd@10-10.200.8.17:22-10.200.16.10:51512.service. Sep 6 00:44:03.697245 sshd[3844]: Accepted publickey for core from 10.200.16.10 port 51512 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:03.699708 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:03.705566 systemd[1]: Started session-13.scope. Sep 6 00:44:03.706118 systemd-logind[1421]: New session 13 of user core. Sep 6 00:44:04.215375 sshd[3844]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:04.218802 systemd[1]: sshd@10-10.200.8.17:22-10.200.16.10:51512.service: Deactivated successfully. Sep 6 00:44:04.219938 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:44:04.220679 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. 
Sep 6 00:44:04.221630 systemd-logind[1421]: Removed session 13. Sep 6 00:44:09.323357 systemd[1]: Started sshd@11-10.200.8.17:22-10.200.16.10:51528.service. Sep 6 00:44:09.955839 sshd[3857]: Accepted publickey for core from 10.200.16.10 port 51528 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:09.957599 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:09.963365 systemd-logind[1421]: New session 14 of user core. Sep 6 00:44:09.963966 systemd[1]: Started session-14.scope. Sep 6 00:44:10.474038 sshd[3857]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:10.477837 systemd[1]: sshd@11-10.200.8.17:22-10.200.16.10:51528.service: Deactivated successfully. Sep 6 00:44:10.479009 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:44:10.479768 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:44:10.480699 systemd-logind[1421]: Removed session 14. Sep 6 00:44:15.582895 systemd[1]: Started sshd@12-10.200.8.17:22-10.200.16.10:45422.service. Sep 6 00:44:16.216086 sshd[3871]: Accepted publickey for core from 10.200.16.10 port 45422 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:16.217913 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:16.223624 systemd[1]: Started session-15.scope. Sep 6 00:44:16.224350 systemd-logind[1421]: New session 15 of user core. Sep 6 00:44:16.744690 sshd[3871]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:16.749327 systemd[1]: sshd@12-10.200.8.17:22-10.200.16.10:45422.service: Deactivated successfully. Sep 6 00:44:16.750338 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:44:16.751062 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:44:16.752008 systemd-logind[1421]: Removed session 15. Sep 6 00:44:16.852786 systemd[1]: Started sshd@13-10.200.8.17:22-10.200.16.10:45430.service. Sep 6 00:44:17.485761 sshd[3883]: Accepted publickey for core from 10.200.16.10 port 45430 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:17.487582 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:17.492035 systemd-logind[1421]: New session 16 of user core. Sep 6 00:44:17.494247 systemd[1]: Started session-16.scope. Sep 6 00:44:18.084067 sshd[3883]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:18.087614 systemd[1]: sshd@13-10.200.8.17:22-10.200.16.10:45430.service: Deactivated successfully. Sep 6 00:44:18.088755 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:44:18.089550 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:44:18.090548 systemd-logind[1421]: Removed session 16. Sep 6 00:44:18.191271 systemd[1]: Started sshd@14-10.200.8.17:22-10.200.16.10:45438.service. Sep 6 00:44:18.826036 sshd[3893]: Accepted publickey for core from 10.200.16.10 port 45438 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:18.827766 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:18.833630 systemd[1]: Started session-17.scope. Sep 6 00:44:18.834231 systemd-logind[1421]: New session 17 of user core. Sep 6 00:44:20.784188 sshd[3893]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:20.788091 systemd[1]: sshd@14-10.200.8.17:22-10.200.16.10:45438.service: Deactivated successfully. 
Sep 6 00:44:20.789181 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:44:20.789975 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:44:20.791004 systemd-logind[1421]: Removed session 17. Sep 6 00:44:20.892009 systemd[1]: Started sshd@15-10.200.8.17:22-10.200.16.10:48664.service. Sep 6 00:44:21.527401 sshd[3911]: Accepted publickey for core from 10.200.16.10 port 48664 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:21.529171 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:21.534874 systemd[1]: Started session-18.scope. Sep 6 00:44:21.535540 systemd-logind[1421]: New session 18 of user core. Sep 6 00:44:22.161858 sshd[3911]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:22.165506 systemd[1]: sshd@15-10.200.8.17:22-10.200.16.10:48664.service: Deactivated successfully. Sep 6 00:44:22.166619 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:44:22.167397 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:44:22.168390 systemd-logind[1421]: Removed session 18. Sep 6 00:44:22.271130 systemd[1]: Started sshd@16-10.200.8.17:22-10.200.16.10:48674.service. Sep 6 00:44:22.904292 sshd[3921]: Accepted publickey for core from 10.200.16.10 port 48674 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:22.906051 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:22.910877 systemd-logind[1421]: New session 19 of user core. Sep 6 00:44:22.911875 systemd[1]: Started session-19.scope. Sep 6 00:44:23.419097 sshd[3921]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:23.422980 systemd[1]: sshd@16-10.200.8.17:22-10.200.16.10:48674.service: Deactivated successfully. Sep 6 00:44:23.424125 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:44:23.424921 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:44:23.425809 systemd-logind[1421]: Removed session 19. Sep 6 00:44:28.527882 systemd[1]: Started sshd@17-10.200.8.17:22-10.200.16.10:48686.service. Sep 6 00:44:29.170081 sshd[3936]: Accepted publickey for core from 10.200.16.10 port 48686 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:29.171915 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:29.177645 systemd-logind[1421]: New session 20 of user core. Sep 6 00:44:29.178242 systemd[1]: Started session-20.scope. Sep 6 00:44:29.681805 sshd[3936]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:29.685226 systemd[1]: sshd@17-10.200.8.17:22-10.200.16.10:48686.service: Deactivated successfully. Sep 6 00:44:29.686384 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:44:29.687181 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:44:29.688203 systemd-logind[1421]: Removed session 20. Sep 6 00:44:34.788963 systemd[1]: Started sshd@18-10.200.8.17:22-10.200.16.10:37962.service. Sep 6 00:44:35.422713 sshd[3948]: Accepted publickey for core from 10.200.16.10 port 37962 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:35.424489 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:35.430064 systemd-logind[1421]: New session 21 of user core. Sep 6 00:44:35.430704 systemd[1]: Started session-21.scope. 
Sep 6 00:44:35.936597 sshd[3948]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:35.940188 systemd[1]: sshd@18-10.200.8.17:22-10.200.16.10:37962.service: Deactivated successfully. Sep 6 00:44:35.941314 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:44:35.942124 systemd-logind[1421]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:44:35.943614 systemd-logind[1421]: Removed session 21. Sep 6 00:44:41.058586 systemd[1]: Started sshd@19-10.200.8.17:22-10.200.16.10:37786.service. Sep 6 00:44:41.691928 sshd[3962]: Accepted publickey for core from 10.200.16.10 port 37786 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:41.693555 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:41.699110 systemd-logind[1421]: New session 22 of user core. Sep 6 00:44:41.699700 systemd[1]: Started session-22.scope. Sep 6 00:44:42.207183 sshd[3962]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:42.210630 systemd-logind[1421]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:44:42.211080 systemd[1]: sshd@19-10.200.8.17:22-10.200.16.10:37786.service: Deactivated successfully. Sep 6 00:44:42.212216 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:44:42.213217 systemd-logind[1421]: Removed session 22. Sep 6 00:44:42.314966 systemd[1]: Started sshd@20-10.200.8.17:22-10.200.16.10:37802.service. Sep 6 00:44:42.947341 sshd[3974]: Accepted publickey for core from 10.200.16.10 port 37802 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:42.949125 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:42.954944 systemd[1]: Started session-23.scope. Sep 6 00:44:42.955673 systemd-logind[1421]: New session 23 of user core. Sep 6 00:44:44.616082 env[1434]: time="2025-09-06T00:44:44.616019841Z" level=info msg="StopContainer for \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\" with timeout 30 (s)" Sep 6 00:44:44.617449 env[1434]: time="2025-09-06T00:44:44.617392845Z" level=info msg="Stop container \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\" with signal terminated" Sep 6 00:44:44.640299 systemd[1]: cri-containerd-4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3.scope: Deactivated successfully. Sep 6 00:44:44.648449 systemd[1]: run-containerd-runc-k8s.io-95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8-runc.Qqw3hB.mount: Deactivated successfully. Sep 6 00:44:44.678625 env[1434]: time="2025-09-06T00:44:44.678540754Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:44:44.685229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3-rootfs.mount: Deactivated successfully. 
Sep 6 00:44:44.690067 env[1434]: time="2025-09-06T00:44:44.690015894Z" level=info msg="StopContainer for \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\" with timeout 2 (s)" Sep 6 00:44:44.690892 env[1434]: time="2025-09-06T00:44:44.690847296Z" level=info msg="Stop container \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\" with signal terminated" Sep 6 00:44:44.699861 systemd-networkd[1585]: lxc_health: Link DOWN Sep 6 00:44:44.699871 systemd-networkd[1585]: lxc_health: Lost carrier Sep 6 00:44:44.726093 systemd[1]: cri-containerd-95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8.scope: Deactivated successfully. Sep 6 00:44:44.726378 systemd[1]: cri-containerd-95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8.scope: Consumed 7.723s CPU time. Sep 6 00:44:44.757115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8-rootfs.mount: Deactivated successfully. Sep 6 00:44:44.772369 env[1434]: time="2025-09-06T00:44:44.772303075Z" level=info msg="shim disconnected" id=4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3 Sep 6 00:44:44.772671 env[1434]: time="2025-09-06T00:44:44.772641376Z" level=warning msg="cleaning up after shim disconnected" id=4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3 namespace=k8s.io Sep 6 00:44:44.772763 env[1434]: time="2025-09-06T00:44:44.772684176Z" level=info msg="cleaning up dead shim" Sep 6 00:44:44.772942 env[1434]: time="2025-09-06T00:44:44.772598676Z" level=info msg="shim disconnected" id=95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8 Sep 6 00:44:44.773040 env[1434]: time="2025-09-06T00:44:44.773026077Z" level=warning msg="cleaning up after shim disconnected" id=95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8 namespace=k8s.io Sep 6 00:44:44.773103 env[1434]: time="2025-09-06T00:44:44.773093378Z" level=info msg="cleaning up dead shim" Sep 6 00:44:44.786225 env[1434]: time="2025-09-06T00:44:44.786170622Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4046 runtime=io.containerd.runc.v2\n" Sep 6 00:44:44.787191 env[1434]: time="2025-09-06T00:44:44.787153826Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4047 runtime=io.containerd.runc.v2\n" Sep 6 00:44:44.793118 env[1434]: time="2025-09-06T00:44:44.793081546Z" level=info msg="StopContainer for \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\" returns successfully" Sep 6 00:44:44.793926 env[1434]: time="2025-09-06T00:44:44.793892749Z" level=info msg="StopPodSandbox for \"3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d\"" Sep 6 00:44:44.794016 env[1434]: time="2025-09-06T00:44:44.793982849Z" level=info msg="Container to stop \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:44.796125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d-shm.mount: Deactivated successfully. 
Sep 6 00:44:44.797035 env[1434]: time="2025-09-06T00:44:44.797002859Z" level=info msg="StopContainer for \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\" returns successfully" Sep 6 00:44:44.797753 env[1434]: time="2025-09-06T00:44:44.797717962Z" level=info msg="StopPodSandbox for \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\"" Sep 6 00:44:44.797858 env[1434]: time="2025-09-06T00:44:44.797809662Z" level=info msg="Container to stop \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:44.797919 env[1434]: time="2025-09-06T00:44:44.797855162Z" level=info msg="Container to stop \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:44.797919 env[1434]: time="2025-09-06T00:44:44.797874862Z" level=info msg="Container to stop \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:44.797919 env[1434]: time="2025-09-06T00:44:44.797892562Z" level=info msg="Container to stop \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:44.797919 env[1434]: time="2025-09-06T00:44:44.797909062Z" level=info msg="Container to stop \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:44.809048 systemd[1]: cri-containerd-aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e.scope: Deactivated successfully. Sep 6 00:44:44.810867 systemd[1]: cri-containerd-3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d.scope: Deactivated successfully. 
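Teardown mirrors the start sequence: StopContainer with a per-call grace timeout (30 s for the operator container, 2 s for the cilium agent above), after which systemd deactivates the container scopes and containerd refuses to stop a sandbox whose containers are not already exited (hence the "must be in running or unknown state" checks). With the same assumed CRI client as earlier, the calls look roughly like:

package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopPod mirrors the log's teardown: stop the container with a grace
// period (SIGTERM, then SIGKILL when the timeout expires), then stop the
// sandbox, which tears down its network.
func stopPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	containerID, sandboxID string, graceSeconds int64) error {
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID,
		Timeout:     graceSeconds, // 30 or 2 in the log above
	}); err != nil {
		return err
	}
	// "TearDown network for sandbox ... successfully" then
	// "StopPodSandbox ... returns successfully".
	_, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID})
	return err
}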
Sep 6 00:44:44.863900 env[1434]: time="2025-09-06T00:44:44.863699387Z" level=info msg="shim disconnected" id=3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d Sep 6 00:44:44.863900 env[1434]: time="2025-09-06T00:44:44.863770287Z" level=warning msg="cleaning up after shim disconnected" id=3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d namespace=k8s.io Sep 6 00:44:44.863900 env[1434]: time="2025-09-06T00:44:44.863787888Z" level=info msg="cleaning up dead shim" Sep 6 00:44:44.864371 env[1434]: time="2025-09-06T00:44:44.864330589Z" level=info msg="shim disconnected" id=aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e Sep 6 00:44:44.864675 env[1434]: time="2025-09-06T00:44:44.864452690Z" level=warning msg="cleaning up after shim disconnected" id=aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e namespace=k8s.io Sep 6 00:44:44.864791 env[1434]: time="2025-09-06T00:44:44.864769991Z" level=info msg="cleaning up dead shim" Sep 6 00:44:44.881371 env[1434]: time="2025-09-06T00:44:44.879546941Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4111 runtime=io.containerd.runc.v2\n" Sep 6 00:44:44.882160 env[1434]: time="2025-09-06T00:44:44.882118650Z" level=info msg="TearDown network for sandbox \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" successfully" Sep 6 00:44:44.882291 env[1434]: time="2025-09-06T00:44:44.882155950Z" level=info msg="StopPodSandbox for \"aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e\" returns successfully" Sep 6 00:44:44.886776 env[1434]: time="2025-09-06T00:44:44.886738466Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4110 runtime=io.containerd.runc.v2\n" Sep 6 00:44:44.887397 env[1434]: time="2025-09-06T00:44:44.887368968Z" level=info msg="TearDown network for sandbox \"3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d\" successfully" Sep 6 00:44:44.887553 env[1434]: time="2025-09-06T00:44:44.887528969Z" level=info msg="StopPodSandbox for \"3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d\" returns successfully" Sep 6 00:44:44.961043 kubelet[2415]: I0906 00:44:44.960987 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-config-path\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.961043 kubelet[2415]: I0906 00:44:44.961038 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cni-path\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.961708 kubelet[2415]: I0906 00:44:44.961067 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krjd5\" (UniqueName: \"kubernetes.io/projected/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-kube-api-access-krjd5\") pod \"85cd2c7a-e29b-4d09-8a1c-b2581be481ea\" (UID: \"85cd2c7a-e29b-4d09-8a1c-b2581be481ea\") " Sep 6 00:44:44.961708 kubelet[2415]: I0906 00:44:44.961090 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-kernel\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.961708 kubelet[2415]: I0906 00:44:44.961112 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-hubble-tls\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.961708 kubelet[2415]: I0906 00:44:44.961131 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-cgroup\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.961708 kubelet[2415]: I0906 00:44:44.961153 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jbgv\" (UniqueName: \"kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-kube-api-access-7jbgv\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.961708 kubelet[2415]: I0906 00:44:44.961180 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-net\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962023 kubelet[2415]: I0906 00:44:44.961202 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-bpf-maps\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962023 kubelet[2415]: I0906 00:44:44.961236 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e38beb11-f548-4cc3-86fb-1edec83f3295-clustermesh-secrets\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962023 kubelet[2415]: I0906 00:44:44.961264 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-lib-modules\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962023 kubelet[2415]: I0906 00:44:44.961287 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-run\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962023 kubelet[2415]: I0906 00:44:44.961307 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-hostproc\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962023 kubelet[2415]: I0906 00:44:44.961331 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-xtables-lock\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962277 kubelet[2415]: I0906 00:44:44.961364 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-cilium-config-path\") pod \"85cd2c7a-e29b-4d09-8a1c-b2581be481ea\" (UID: \"85cd2c7a-e29b-4d09-8a1c-b2581be481ea\") " Sep 6 00:44:44.962277 kubelet[2415]: I0906 00:44:44.961389 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-etc-cni-netd\") pod \"e38beb11-f548-4cc3-86fb-1edec83f3295\" (UID: \"e38beb11-f548-4cc3-86fb-1edec83f3295\") " Sep 6 00:44:44.962277 kubelet[2415]: I0906 00:44:44.961505 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.964011 kubelet[2415]: I0906 00:44:44.963964 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:44:44.964158 kubelet[2415]: I0906 00:44:44.964047 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cni-path" (OuterVolumeSpecName: "cni-path") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.964468 kubelet[2415]: I0906 00:44:44.964437 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.964542 kubelet[2415]: I0906 00:44:44.964484 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.965795 kubelet[2415]: I0906 00:44:44.965756 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.968995 kubelet[2415]: I0906 00:44:44.968927 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.969116 kubelet[2415]: I0906 00:44:44.968951 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.969235 kubelet[2415]: I0906 00:44:44.969203 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-hostproc" (OuterVolumeSpecName: "hostproc") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.969337 kubelet[2415]: I0906 00:44:44.969323 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.972328 kubelet[2415]: I0906 00:44:44.972302 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85cd2c7a-e29b-4d09-8a1c-b2581be481ea" (UID: "85cd2c7a-e29b-4d09-8a1c-b2581be481ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:44:44.972568 kubelet[2415]: I0906 00:44:44.972548 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:44:44.972725 kubelet[2415]: I0906 00:44:44.972704 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:44.972994 kubelet[2415]: I0906 00:44:44.972956 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38beb11-f548-4cc3-86fb-1edec83f3295-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:44:44.973559 kubelet[2415]: I0906 00:44:44.973528 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-kube-api-access-krjd5" (OuterVolumeSpecName: "kube-api-access-krjd5") pod "85cd2c7a-e29b-4d09-8a1c-b2581be481ea" (UID: "85cd2c7a-e29b-4d09-8a1c-b2581be481ea"). InnerVolumeSpecName "kube-api-access-krjd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:44:44.975308 kubelet[2415]: I0906 00:44:44.975278 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-kube-api-access-7jbgv" (OuterVolumeSpecName: "kube-api-access-7jbgv") pod "e38beb11-f548-4cc3-86fb-1edec83f3295" (UID: "e38beb11-f548-4cc3-86fb-1edec83f3295"). InnerVolumeSpecName "kube-api-access-7jbgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:44:45.062073 kubelet[2415]: I0906 00:44:45.062001 2415 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krjd5\" (UniqueName: \"kubernetes.io/projected/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-kube-api-access-krjd5\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062073 kubelet[2415]: I0906 00:44:45.062061 2415 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-config-path\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062073 kubelet[2415]: I0906 00:44:45.062078 2415 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cni-path\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062094 2415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062114 2415 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-hubble-tls\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062129 2415 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-cgroup\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062142 2415 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jbgv\" (UniqueName: \"kubernetes.io/projected/e38beb11-f548-4cc3-86fb-1edec83f3295-kube-api-access-7jbgv\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062157 2415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-host-proc-sys-net\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062169 2415 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-bpf-maps\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath 
\"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062180 2415 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e38beb11-f548-4cc3-86fb-1edec83f3295-clustermesh-secrets\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062427 kubelet[2415]: I0906 00:44:45.062192 2415 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-lib-modules\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062633 kubelet[2415]: I0906 00:44:45.062204 2415 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-cilium-run\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062633 kubelet[2415]: I0906 00:44:45.062217 2415 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-hostproc\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062633 kubelet[2415]: I0906 00:44:45.062229 2415 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-xtables-lock\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062633 kubelet[2415]: I0906 00:44:45.062241 2415 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85cd2c7a-e29b-4d09-8a1c-b2581be481ea-cilium-config-path\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.062633 kubelet[2415]: I0906 00:44:45.062252 2415 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e38beb11-f548-4cc3-86fb-1edec83f3295-etc-cni-netd\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:45.174250 kubelet[2415]: I0906 00:44:45.174106 2415 scope.go:117] "RemoveContainer" containerID="4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3" Sep 6 00:44:45.180844 env[1434]: time="2025-09-06T00:44:45.180414587Z" level=info msg="RemoveContainer for \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\"" Sep 6 00:44:45.181472 systemd[1]: Removed slice kubepods-besteffort-pod85cd2c7a_e29b_4d09_8a1c_b2581be481ea.slice. Sep 6 00:44:45.191707 systemd[1]: Removed slice kubepods-burstable-pode38beb11_f548_4cc3_86fb_1edec83f3295.slice. Sep 6 00:44:45.191840 systemd[1]: kubepods-burstable-pode38beb11_f548_4cc3_86fb_1edec83f3295.slice: Consumed 7.862s CPU time. 
Sep 6 00:44:45.212337 env[1434]: time="2025-09-06T00:44:45.212274699Z" level=info msg="RemoveContainer for \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\" returns successfully" Sep 6 00:44:45.213955 kubelet[2415]: I0906 00:44:45.212877 2415 scope.go:117] "RemoveContainer" containerID="4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3" Sep 6 00:44:45.214137 env[1434]: time="2025-09-06T00:44:45.213433103Z" level=error msg="ContainerStatus for \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\": not found" Sep 6 00:44:45.215488 kubelet[2415]: E0906 00:44:45.215449 2415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\": not found" containerID="4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3" Sep 6 00:44:45.215812 kubelet[2415]: I0906 00:44:45.215673 2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3"} err="failed to get container status \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ba34b2aa6c8f59814014ae5f6c22d38a931336c79dd832dda9a1a7bf34362d3\": not found" Sep 6 00:44:45.215964 kubelet[2415]: I0906 00:44:45.215946 2415 scope.go:117] "RemoveContainer" containerID="95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8" Sep 6 00:44:45.217427 env[1434]: time="2025-09-06T00:44:45.217396117Z" level=info msg="RemoveContainer for \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\"" Sep 6 00:44:45.225630 env[1434]: time="2025-09-06T00:44:45.225593046Z" level=info msg="RemoveContainer for \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\" returns successfully" Sep 6 00:44:45.225872 kubelet[2415]: I0906 00:44:45.225849 2415 scope.go:117] "RemoveContainer" containerID="da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59" Sep 6 00:44:45.227140 env[1434]: time="2025-09-06T00:44:45.227109551Z" level=info msg="RemoveContainer for \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\"" Sep 6 00:44:45.236792 env[1434]: time="2025-09-06T00:44:45.236752285Z" level=info msg="RemoveContainer for \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\" returns successfully" Sep 6 00:44:45.237035 kubelet[2415]: I0906 00:44:45.237012 2415 scope.go:117] "RemoveContainer" containerID="b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad" Sep 6 00:44:45.238126 env[1434]: time="2025-09-06T00:44:45.238097090Z" level=info msg="RemoveContainer for \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\"" Sep 6 00:44:45.247504 env[1434]: time="2025-09-06T00:44:45.247469423Z" level=info msg="RemoveContainer for \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\" returns successfully" Sep 6 00:44:45.247639 kubelet[2415]: I0906 00:44:45.247618 2415 scope.go:117] "RemoveContainer" containerID="c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d" Sep 6 00:44:45.248629 env[1434]: time="2025-09-06T00:44:45.248601027Z" level=info msg="RemoveContainer for 
\"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\"" Sep 6 00:44:45.258226 env[1434]: time="2025-09-06T00:44:45.258191761Z" level=info msg="RemoveContainer for \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\" returns successfully" Sep 6 00:44:45.258368 kubelet[2415]: I0906 00:44:45.258348 2415 scope.go:117] "RemoveContainer" containerID="73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d" Sep 6 00:44:45.259349 env[1434]: time="2025-09-06T00:44:45.259322064Z" level=info msg="RemoveContainer for \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\"" Sep 6 00:44:45.269428 env[1434]: time="2025-09-06T00:44:45.269395000Z" level=info msg="RemoveContainer for \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\" returns successfully" Sep 6 00:44:45.269603 kubelet[2415]: I0906 00:44:45.269586 2415 scope.go:117] "RemoveContainer" containerID="95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8" Sep 6 00:44:45.269948 env[1434]: time="2025-09-06T00:44:45.269886302Z" level=error msg="ContainerStatus for \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\": not found" Sep 6 00:44:45.270144 kubelet[2415]: E0906 00:44:45.270118 2415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\": not found" containerID="95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8" Sep 6 00:44:45.270226 kubelet[2415]: I0906 00:44:45.270167 2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8"} err="failed to get container status \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"95f673027b9fd27159b05818edfdd9adea7cc685e8bac2f43138a783b8043bd8\": not found" Sep 6 00:44:45.270226 kubelet[2415]: I0906 00:44:45.270198 2415 scope.go:117] "RemoveContainer" containerID="da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59" Sep 6 00:44:45.270516 env[1434]: time="2025-09-06T00:44:45.270454704Z" level=error msg="ContainerStatus for \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\": not found" Sep 6 00:44:45.270666 kubelet[2415]: E0906 00:44:45.270639 2415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\": not found" containerID="da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59" Sep 6 00:44:45.270742 kubelet[2415]: I0906 00:44:45.270672 2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59"} err="failed to get container status \"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"da18f9958015b22697919ea03e71840875bc79d25b34dd619a4ab039a4c5de59\": not found" Sep 6 00:44:45.270742 kubelet[2415]: I0906 00:44:45.270714 2415 scope.go:117] "RemoveContainer" containerID="b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad" Sep 6 00:44:45.271462 env[1434]: time="2025-09-06T00:44:45.271410907Z" level=error msg="ContainerStatus for \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\": not found" Sep 6 00:44:45.271890 kubelet[2415]: E0906 00:44:45.271865 2415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\": not found" containerID="b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad" Sep 6 00:44:45.271986 kubelet[2415]: I0906 00:44:45.271895 2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad"} err="failed to get container status \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"b32a8ee28f48f7b26dd6f1181071da27f5b57c2cd000c4d40cb19c42bb69f5ad\": not found" Sep 6 00:44:45.271986 kubelet[2415]: I0906 00:44:45.271918 2415 scope.go:117] "RemoveContainer" containerID="c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d" Sep 6 00:44:45.272339 env[1434]: time="2025-09-06T00:44:45.272277210Z" level=error msg="ContainerStatus for \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\": not found" Sep 6 00:44:45.272714 kubelet[2415]: E0906 00:44:45.272690 2415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\": not found" containerID="c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d" Sep 6 00:44:45.272808 kubelet[2415]: I0906 00:44:45.272728 2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d"} err="failed to get container status \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c55257ced28828abe285c83df3b3312105d0437d8d7bbf214dcf1925a5a3f72d\": not found" Sep 6 00:44:45.272808 kubelet[2415]: I0906 00:44:45.272749 2415 scope.go:117] "RemoveContainer" containerID="73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d" Sep 6 00:44:45.279381 env[1434]: time="2025-09-06T00:44:45.279321435Z" level=error msg="ContainerStatus for \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\": not found" Sep 6 00:44:45.279888 kubelet[2415]: E0906 00:44:45.279861 2415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\": not found" containerID="73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d" Sep 6 00:44:45.280073 kubelet[2415]: I0906 00:44:45.280047 2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d"} err="failed to get container status \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\": rpc error: code = NotFound desc = an error occurred when try to find container \"73bf9908910cd56f0580a17f2ba4c48d27ba3b8ab4820d0fadf09bd780a7900d\": not found" Sep 6 00:44:45.638169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3009c33596fa996c3ff2426967b4463cf4b35a96ba46f4b946ab7c2e9fb6206d-rootfs.mount: Deactivated successfully. Sep 6 00:44:45.638311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e-rootfs.mount: Deactivated successfully. Sep 6 00:44:45.638385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa21e705e3de7ba61361bbf6da8dfcf5c28636ed6be45e56fb943be77eaf640e-shm.mount: Deactivated successfully. Sep 6 00:44:45.638455 systemd[1]: var-lib-kubelet-pods-85cd2c7a\x2de29b\x2d4d09\x2d8a1c\x2db2581be481ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkrjd5.mount: Deactivated successfully. Sep 6 00:44:45.638538 systemd[1]: var-lib-kubelet-pods-e38beb11\x2df548\x2d4cc3\x2d86fb\x2d1edec83f3295-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7jbgv.mount: Deactivated successfully. Sep 6 00:44:45.638627 systemd[1]: var-lib-kubelet-pods-e38beb11\x2df548\x2d4cc3\x2d86fb\x2d1edec83f3295-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:44:45.638713 systemd[1]: var-lib-kubelet-pods-e38beb11\x2df548\x2d4cc3\x2d86fb\x2d1edec83f3295-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:44:45.656260 kubelet[2415]: I0906 00:44:45.656202 2415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85cd2c7a-e29b-4d09-8a1c-b2581be481ea" path="/var/lib/kubelet/pods/85cd2c7a-e29b-4d09-8a1c-b2581be481ea/volumes" Sep 6 00:44:45.656739 kubelet[2415]: I0906 00:44:45.656712 2415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38beb11-f548-4cc3-86fb-1edec83f3295" path="/var/lib/kubelet/pods/e38beb11-f548-4cc3-86fb-1edec83f3295/volumes" Sep 6 00:44:46.039771 kubelet[2415]: E0906 00:44:46.039715 2415 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:44:46.645706 sshd[3974]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:46.649503 systemd[1]: sshd@20-10.200.8.17:22-10.200.16.10:37802.service: Deactivated successfully. Sep 6 00:44:46.650667 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:44:46.651524 systemd-logind[1421]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:44:46.652516 systemd-logind[1421]: Removed session 23. Sep 6 00:44:46.758020 systemd[1]: Started sshd@21-10.200.8.17:22-10.200.16.10:37812.service. 
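The run of ContainerStatus "not found" errors above reads as kubelet querying containers it has just removed; containerd answering NotFound after a successful RemoveContainer is the expected read-after-delete pattern rather than a new failure. The var-lib-kubelet-pods-... mount units deactivated alongside them are systemd-escaped kubelet volume paths: \x2d encodes '-' and \x7e encodes '~'. A minimal decoding sketch, assuming the systemd-escape tool is available on the node and the .mount suffix is dropped first:

    # Decode one unit name from the log back to the volume path it served;
    # single quotes keep the \x escapes intact for systemd-escape.
    systemd-escape --unescape --path \
      'var-lib-kubelet-pods-e38beb11\x2df548\x2d4cc3\x2d86fb\x2d1edec83f3295-volumes-kubernetes.io\x7eprojected-hubble\x2dtls'
    # expected output:
    # /var/lib/kubelet/pods/e38beb11-f548-4cc3-86fb-1edec83f3295/volumes/kubernetes.io~projected/hubble-tls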
Sep 6 00:44:47.391030 sshd[4143]: Accepted publickey for core from 10.200.16.10 port 37812 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:47.392724 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:47.398402 systemd[1]: Started session-24.scope. Sep 6 00:44:47.398988 systemd-logind[1421]: New session 24 of user core. Sep 6 00:44:48.320430 kubelet[2415]: E0906 00:44:48.316943 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38beb11-f548-4cc3-86fb-1edec83f3295" containerName="mount-cgroup" Sep 6 00:44:48.320430 kubelet[2415]: E0906 00:44:48.316996 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38beb11-f548-4cc3-86fb-1edec83f3295" containerName="mount-bpf-fs" Sep 6 00:44:48.320430 kubelet[2415]: E0906 00:44:48.317008 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="85cd2c7a-e29b-4d09-8a1c-b2581be481ea" containerName="cilium-operator" Sep 6 00:44:48.320430 kubelet[2415]: E0906 00:44:48.317017 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38beb11-f548-4cc3-86fb-1edec83f3295" containerName="cilium-agent" Sep 6 00:44:48.320430 kubelet[2415]: E0906 00:44:48.317028 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38beb11-f548-4cc3-86fb-1edec83f3295" containerName="apply-sysctl-overwrites" Sep 6 00:44:48.320430 kubelet[2415]: E0906 00:44:48.317038 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38beb11-f548-4cc3-86fb-1edec83f3295" containerName="clean-cilium-state" Sep 6 00:44:48.320430 kubelet[2415]: I0906 00:44:48.317103 2415 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38beb11-f548-4cc3-86fb-1edec83f3295" containerName="cilium-agent" Sep 6 00:44:48.320430 kubelet[2415]: I0906 00:44:48.317116 2415 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cd2c7a-e29b-4d09-8a1c-b2581be481ea" containerName="cilium-operator" Sep 6 00:44:48.327217 systemd[1]: Created slice kubepods-burstable-pod5cbb2571_6e30_43fc_a763_face032e2d3c.slice. 
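The kubepods-burstable-pod5cbb2571_6e30_43fc_a763_face032e2d3c.slice created above embeds the new pod's UID with every '-' mapped to '_', which is how the systemd cgroup driver (implied by the slice naming, not stated in the log) names pod slices. A hedged way to rebuild and inspect that name from the UID:

    # Pod UID -> slice name: swap '-' for '_' and wrap in the burstable prefix.
    uid=5cbb2571-6e30-43fc-a763-face032e2d3c
    systemctl status "kubepods-burstable-pod$(echo "$uid" | tr - _).slice"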
Sep 6 00:44:48.383751 kubelet[2415]: I0906 00:44:48.383693 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cni-path\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.383751 kubelet[2415]: I0906 00:44:48.383742 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-etc-cni-netd\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384076 kubelet[2415]: I0906 00:44:48.383771 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-xtables-lock\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384076 kubelet[2415]: I0906 00:44:48.383797 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-config-path\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384076 kubelet[2415]: I0906 00:44:48.383833 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-ipsec-secrets\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384076 kubelet[2415]: I0906 00:44:48.383860 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2tt\" (UniqueName: \"kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-kube-api-access-vj2tt\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384076 kubelet[2415]: I0906 00:44:48.383889 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-run\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384278 kubelet[2415]: I0906 00:44:48.383911 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-kernel\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384278 kubelet[2415]: I0906 00:44:48.383930 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-bpf-maps\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384278 kubelet[2415]: I0906 00:44:48.383951 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-cgroup\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384278 kubelet[2415]: I0906 00:44:48.383972 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-lib-modules\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384278 kubelet[2415]: I0906 00:44:48.383994 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-hubble-tls\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384278 kubelet[2415]: I0906 00:44:48.384019 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-hostproc\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384439 kubelet[2415]: I0906 00:44:48.384041 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-clustermesh-secrets\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.384439 kubelet[2415]: I0906 00:44:48.384070 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-net\") pod \"cilium-wr2q2\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " pod="kube-system/cilium-wr2q2" Sep 6 00:44:48.395006 sshd[4143]: pam_unix(sshd:session): session closed for user core Sep 6 00:44:48.398775 systemd[1]: sshd@21-10.200.8.17:22-10.200.16.10:37812.service: Deactivated successfully. Sep 6 00:44:48.399830 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:44:48.400603 systemd-logind[1421]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:44:48.401575 systemd-logind[1421]: Removed session 24. Sep 6 00:44:48.506330 systemd[1]: Started sshd@22-10.200.8.17:22-10.200.16.10:37828.service. Sep 6 00:44:48.633674 env[1434]: time="2025-09-06T00:44:48.633513396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wr2q2,Uid:5cbb2571-6e30-43fc-a763-face032e2d3c,Namespace:kube-system,Attempt:0,}" Sep 6 00:44:48.685194 env[1434]: time="2025-09-06T00:44:48.685115492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:44:48.685882 env[1434]: time="2025-09-06T00:44:48.685154692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:44:48.685882 env[1434]: time="2025-09-06T00:44:48.685169792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:44:48.685882 env[1434]: time="2025-09-06T00:44:48.685425693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3 pid=4168 runtime=io.containerd.runc.v2 Sep 6 00:44:48.700539 systemd[1]: Started cri-containerd-5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3.scope. Sep 6 00:44:48.742147 env[1434]: time="2025-09-06T00:44:48.742083508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wr2q2,Uid:5cbb2571-6e30-43fc-a763-face032e2d3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\"" Sep 6 00:44:48.747221 env[1434]: time="2025-09-06T00:44:48.747176428Z" level=info msg="CreateContainer within sandbox \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:44:48.787293 env[1434]: time="2025-09-06T00:44:48.787158480Z" level=info msg="CreateContainer within sandbox \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\"" Sep 6 00:44:48.789391 env[1434]: time="2025-09-06T00:44:48.789350888Z" level=info msg="StartContainer for \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\"" Sep 6 00:44:48.811481 systemd[1]: Started cri-containerd-dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16.scope. Sep 6 00:44:48.830177 systemd[1]: cri-containerd-dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16.scope: Deactivated successfully. 
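The RunPodSandbox -> CreateContainer -> StartContainer sequence above is kubelet driving containerd over CRI. A hedged sketch of retracing the same objects from the node with crictl (assumed installed; the endpoint below is containerd's default socket path, which the log does not confirm):

    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    crictl pods --name cilium-wr2q2   # should list sandbox 5e2dc553d76d...
    # List all containers in that sandbox, including exited attempts:
    crictl ps -a --pod 5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3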
Sep 6 00:44:48.902000 env[1434]: time="2025-09-06T00:44:48.901808815Z" level=info msg="shim disconnected" id=dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16 Sep 6 00:44:48.902609 env[1434]: time="2025-09-06T00:44:48.902569718Z" level=warning msg="cleaning up after shim disconnected" id=dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16 namespace=k8s.io Sep 6 00:44:48.902746 env[1434]: time="2025-09-06T00:44:48.902722919Z" level=info msg="cleaning up dead shim" Sep 6 00:44:48.914796 env[1434]: time="2025-09-06T00:44:48.914749365Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4227 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:44:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:44:48.915253 env[1434]: time="2025-09-06T00:44:48.915119666Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed" Sep 6 00:44:48.916757 env[1434]: time="2025-09-06T00:44:48.916123470Z" level=error msg="Failed to pipe stdout of container \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\"" error="reading from a closed fifo" Sep 6 00:44:48.917015 env[1434]: time="2025-09-06T00:44:48.916968273Z" level=error msg="Failed to pipe stderr of container \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\"" error="reading from a closed fifo" Sep 6 00:44:48.921437 env[1434]: time="2025-09-06T00:44:48.921375990Z" level=error msg="StartContainer for \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:44:48.921746 kubelet[2415]: E0906 00:44:48.921703 2415 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16" Sep 6 00:44:48.922007 kubelet[2415]: E0906 00:44:48.921975 2415 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:44:48.922007 kubelet[2415]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:44:48.922007 kubelet[2415]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:44:48.922007 kubelet[2415]: rm /hostbin/cilium-mount Sep 6 00:44:48.922204 kubelet[2415]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wr2q2_kube-system(5cbb2571-6e30-43fc-a763-face032e2d3c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:44:48.922204 kubelet[2415]: > logger="UnhandledError" Sep 6 00:44:48.923619 kubelet[2415]: E0906 00:44:48.923574 2415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wr2q2" podUID="5cbb2571-6e30-43fc-a763-face032e2d3c" Sep 6 00:44:49.162660 sshd[4159]: Accepted publickey for core from 10.200.16.10 port 37828 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:49.163960 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:49.169701 systemd[1]: Started session-25.scope. Sep 6 00:44:49.170204 systemd-logind[1421]: New session 25 of user core. 
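The StartContainer failure above (repeated by the retry below) dies on write /proc/self/attr/keycreate: invalid argument, which is runc applying the SELinux label from the spec dump (SELinuxOptions with Type:spc_t) to the new task's keyring. One plausible reading, not confirmed by the log, is that the label is being written on a kernel where no SELinux policy is actually loaded. Two hedged on-node checks:

    # Is an SELinux policy loaded and enforced on this kernel?
    cat /sys/fs/selinux/enforce 2>/dev/null || echo "selinuxfs not mounted"
    # Was SELinux toggled on the kernel command line?
    grep -o 'selinux=[01]' /proc/cmdline || echo "no selinux= parameter"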
Sep 6 00:44:49.204242 env[1434]: time="2025-09-06T00:44:49.204172183Z" level=info msg="CreateContainer within sandbox \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Sep 6 00:44:49.248580 env[1434]: time="2025-09-06T00:44:49.248509956Z" level=info msg="CreateContainer within sandbox \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\"" Sep 6 00:44:49.250933 env[1434]: time="2025-09-06T00:44:49.249262559Z" level=info msg="StartContainer for \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\"" Sep 6 00:44:49.271053 systemd[1]: Started cri-containerd-0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc.scope. Sep 6 00:44:49.285777 systemd[1]: cri-containerd-0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc.scope: Deactivated successfully. Sep 6 00:44:49.321044 env[1434]: time="2025-09-06T00:44:49.320956938Z" level=info msg="shim disconnected" id=0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc Sep 6 00:44:49.321044 env[1434]: time="2025-09-06T00:44:49.321051138Z" level=warning msg="cleaning up after shim disconnected" id=0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc namespace=k8s.io Sep 6 00:44:49.321044 env[1434]: time="2025-09-06T00:44:49.321064038Z" level=info msg="cleaning up dead shim" Sep 6 00:44:49.332381 env[1434]: time="2025-09-06T00:44:49.332306282Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4263 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:44:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:44:49.332738 env[1434]: time="2025-09-06T00:44:49.332664783Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed" Sep 6 00:44:49.335923 env[1434]: time="2025-09-06T00:44:49.335863596Z" level=error msg="Failed to pipe stdout of container \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\"" error="reading from a closed fifo" Sep 6 00:44:49.338151 env[1434]: time="2025-09-06T00:44:49.338097605Z" level=error msg="Failed to pipe stderr of container \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\"" error="reading from a closed fifo" Sep 6 00:44:49.343110 env[1434]: time="2025-09-06T00:44:49.343046324Z" level=error msg="StartContainer for \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:44:49.343434 kubelet[2415]: E0906 00:44:49.343377 2415 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc" Sep 
6 00:44:49.343942 kubelet[2415]: E0906 00:44:49.343600 2415 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:44:49.343942 kubelet[2415]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:44:49.343942 kubelet[2415]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:44:49.343942 kubelet[2415]: rm /hostbin/cilium-mount Sep 6 00:44:49.343942 kubelet[2415]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wr2q2_kube-system(5cbb2571-6e30-43fc-a763-face032e2d3c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:44:49.343942 kubelet[2415]: > logger="UnhandledError" Sep 6 00:44:49.344989 kubelet[2415]: E0906 00:44:49.344937 2415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wr2q2" podUID="5cbb2571-6e30-43fc-a763-face032e2d3c" Sep 6 00:44:49.493958 kubelet[2415]: I0906 00:44:49.491385 2415 setters.go:600] "Node became not ready" node="ci-3510.3.8-n-cde0707216" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:44:49Z","lastTransitionTime":"2025-09-06T00:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:44:49.708236 sshd[4159]: pam_unix(sshd:session): session closed 
for user core Sep 6 00:44:49.711860 systemd[1]: sshd@22-10.200.8.17:22-10.200.16.10:37828.service: Deactivated successfully. Sep 6 00:44:49.712978 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:44:49.713739 systemd-logind[1421]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:44:49.714756 systemd-logind[1421]: Removed session 25. Sep 6 00:44:49.816050 systemd[1]: Started sshd@23-10.200.8.17:22-10.200.16.10:37844.service. Sep 6 00:44:50.202641 kubelet[2415]: I0906 00:44:50.202335 2415 scope.go:117] "RemoveContainer" containerID="dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16" Sep 6 00:44:50.203300 env[1434]: time="2025-09-06T00:44:50.203236490Z" level=info msg="StopPodSandbox for \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\"" Sep 6 00:44:50.203915 env[1434]: time="2025-09-06T00:44:50.203882093Z" level=info msg="Container to stop \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:50.204174 env[1434]: time="2025-09-06T00:44:50.204142694Z" level=info msg="RemoveContainer for \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\"" Sep 6 00:44:50.205391 env[1434]: time="2025-09-06T00:44:50.204146894Z" level=info msg="Container to stop \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:44:50.210988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3-shm.mount: Deactivated successfully. Sep 6 00:44:50.218292 env[1434]: time="2025-09-06T00:44:50.218139750Z" level=info msg="RemoveContainer for \"dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16\" returns successfully" Sep 6 00:44:50.231032 systemd[1]: cri-containerd-5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3.scope: Deactivated successfully. Sep 6 00:44:50.272552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3-rootfs.mount: Deactivated successfully. 
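The StopPodSandbox above, with its two "Container to stop ... must be in running or unknown state" notes for the exited attempts, is the kubelet-driven teardown of the failed pod. A hedged crictl equivalent of the same CRI calls, sandbox id verbatim from the log:

    sb=5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3
    crictl stopp "$sb"   # StopPodSandbox: stop remaining containers, tear down the netns
    crictl rmp "$sb"     # RemovePodSandbox: drop the sandbox record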
Sep 6 00:44:50.290439 env[1434]: time="2025-09-06T00:44:50.290359637Z" level=info msg="shim disconnected" id=5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3 Sep 6 00:44:50.290439 env[1434]: time="2025-09-06T00:44:50.290432038Z" level=warning msg="cleaning up after shim disconnected" id=5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3 namespace=k8s.io Sep 6 00:44:50.290439 env[1434]: time="2025-09-06T00:44:50.290444338Z" level=info msg="cleaning up dead shim" Sep 6 00:44:50.301246 env[1434]: time="2025-09-06T00:44:50.301182681Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4305 runtime=io.containerd.runc.v2\n" Sep 6 00:44:50.301636 env[1434]: time="2025-09-06T00:44:50.301595882Z" level=info msg="TearDown network for sandbox \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\" successfully" Sep 6 00:44:50.301636 env[1434]: time="2025-09-06T00:44:50.301634482Z" level=info msg="StopPodSandbox for \"5e2dc553d76dda710b02e1abd0dd854e4fc9df8e80ab4087ddda8d2e1809adc3\" returns successfully" Sep 6 00:44:50.397749 kubelet[2415]: I0906 00:44:50.397680 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-xtables-lock\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.397749 kubelet[2415]: I0906 00:44:50.397747 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-bpf-maps\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397787 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-ipsec-secrets\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397807 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-kernel\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397858 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-clustermesh-secrets\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397877 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-etc-cni-netd\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397900 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-config-path\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: 
\"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397925 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cni-path\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397947 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-cgroup\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.397971 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-lib-modules\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.398003 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-net\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.398025 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-run\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.398050 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj2tt\" (UniqueName: \"kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-kube-api-access-vj2tt\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.398071 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-hostproc\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.398456 kubelet[2415]: I0906 00:44:50.398101 2415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-hubble-tls\") pod \"5cbb2571-6e30-43fc-a763-face032e2d3c\" (UID: \"5cbb2571-6e30-43fc-a763-face032e2d3c\") " Sep 6 00:44:50.399324 kubelet[2415]: I0906 00:44:50.399276 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cni-path" (OuterVolumeSpecName: "cni-path") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.399441 kubelet[2415]: I0906 00:44:50.399341 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.399441 kubelet[2415]: I0906 00:44:50.399363 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.399441 kubelet[2415]: I0906 00:44:50.399383 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.399441 kubelet[2415]: I0906 00:44:50.399402 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.405712 systemd[1]: var-lib-kubelet-pods-5cbb2571\x2d6e30\x2d43fc\x2da763\x2dface032e2d3c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:44:50.415476 kubelet[2415]: I0906 00:44:50.406542 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-kube-api-access-vj2tt" (OuterVolumeSpecName: "kube-api-access-vj2tt") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "kube-api-access-vj2tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:44:50.415476 kubelet[2415]: I0906 00:44:50.406723 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-hostproc" (OuterVolumeSpecName: "hostproc") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.415476 kubelet[2415]: I0906 00:44:50.407476 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:44:50.415476 kubelet[2415]: I0906 00:44:50.412331 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.415476 kubelet[2415]: I0906 00:44:50.412392 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.415476 kubelet[2415]: I0906 00:44:50.412495 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:44:50.418585 kubelet[2415]: I0906 00:44:50.416217 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:44:50.418585 kubelet[2415]: I0906 00:44:50.416297 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.418585 kubelet[2415]: I0906 00:44:50.416328 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:44:50.415982 systemd[1]: var-lib-kubelet-pods-5cbb2571\x2d6e30\x2d43fc\x2da763\x2dface032e2d3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvj2tt.mount: Deactivated successfully. Sep 6 00:44:50.416135 systemd[1]: var-lib-kubelet-pods-5cbb2571\x2d6e30\x2d43fc\x2da763\x2dface032e2d3c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:44:50.419479 kubelet[2415]: I0906 00:44:50.419449 2415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cbb2571-6e30-43fc-a763-face032e2d3c" (UID: "5cbb2571-6e30-43fc-a763-face032e2d3c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:44:50.448212 sshd[4284]: Accepted publickey for core from 10.200.16.10 port 37844 ssh2: RSA SHA256:zqYQlX7qSkE+/NL+x/mSmm1i/eG+8owV57+kdh1Gc1Y Sep 6 00:44:50.450139 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:44:50.460356 systemd-logind[1421]: New session 26 of user core. Sep 6 00:44:50.461729 systemd[1]: Started session-26.scope. Sep 6 00:44:50.496521 systemd[1]: var-lib-kubelet-pods-5cbb2571\x2d6e30\x2d43fc\x2da763\x2dface032e2d3c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:44:50.499003 kubelet[2415]: I0906 00:44:50.498962 2415 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj2tt\" (UniqueName: \"kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-kube-api-access-vj2tt\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499189 kubelet[2415]: I0906 00:44:50.499177 2415 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-hostproc\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499291 kubelet[2415]: I0906 00:44:50.499276 2415 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cbb2571-6e30-43fc-a763-face032e2d3c-hubble-tls\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499377 kubelet[2415]: I0906 00:44:50.499367 2415 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-xtables-lock\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499466 kubelet[2415]: I0906 00:44:50.499455 2415 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499532 kubelet[2415]: I0906 00:44:50.499523 2415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499596 kubelet[2415]: I0906 00:44:50.499588 2415 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-bpf-maps\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499668 kubelet[2415]: I0906 00:44:50.499660 2415 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cbb2571-6e30-43fc-a763-face032e2d3c-clustermesh-secrets\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499730 kubelet[2415]: I0906 00:44:50.499719 2415 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cni-path\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499785 kubelet[2415]: I0906 00:44:50.499773 2415 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-etc-cni-netd\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499859 kubelet[2415]: I0906 00:44:50.499851 2415 
reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-config-path\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499931 kubelet[2415]: I0906 00:44:50.499922 2415 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-cgroup\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.499995 kubelet[2415]: I0906 00:44:50.499987 2415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-host-proc-sys-net\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.500062 kubelet[2415]: I0906 00:44:50.500046 2415 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-cilium-run\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:50.500117 kubelet[2415]: I0906 00:44:50.500110 2415 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbb2571-6e30-43fc-a763-face032e2d3c-lib-modules\") on node \"ci-3510.3.8-n-cde0707216\" DevicePath \"\"" Sep 6 00:44:51.041555 kubelet[2415]: E0906 00:44:51.041491 2415 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:44:51.206151 kubelet[2415]: I0906 00:44:51.206111 2415 scope.go:117] "RemoveContainer" containerID="0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc" Sep 6 00:44:51.207346 env[1434]: time="2025-09-06T00:44:51.207290708Z" level=info msg="RemoveContainer for \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\"" Sep 6 00:44:51.212682 systemd[1]: Removed slice kubepods-burstable-pod5cbb2571_6e30_43fc_a763_face032e2d3c.slice. Sep 6 00:44:51.218864 env[1434]: time="2025-09-06T00:44:51.218805655Z" level=info msg="RemoveContainer for \"0bff838889b65e8ca4c98e36cf3ce8bfe0fd460cfcaa6d2cdc561935cf43aefc\" returns successfully" Sep 6 00:44:51.266696 kubelet[2415]: E0906 00:44:51.266648 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cbb2571-6e30-43fc-a763-face032e2d3c" containerName="mount-cgroup" Sep 6 00:44:51.266696 kubelet[2415]: E0906 00:44:51.266683 2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cbb2571-6e30-43fc-a763-face032e2d3c" containerName="mount-cgroup" Sep 6 00:44:51.266696 kubelet[2415]: I0906 00:44:51.266721 2415 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cbb2571-6e30-43fc-a763-face032e2d3c" containerName="mount-cgroup" Sep 6 00:44:51.267071 kubelet[2415]: I0906 00:44:51.266731 2415 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cbb2571-6e30-43fc-a763-face032e2d3c" containerName="mount-cgroup" Sep 6 00:44:51.274949 systemd[1]: Created slice kubepods-burstable-pod1ded7644_2258_4e96_a8b4_4bb8ae8f5950.slice. 
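With the old pod's slice removed and kubepods-burstable-pod1ded7644_2258_4e96_a8b4_4bb8ae8f5950.slice created above, the retry arrives as a brand-new pod (cilium-qdqnv below) rather than a restart of cilium-wr2q2. A hedged way to watch that swap in the cgroup tree from the node:

    # The 5cbb2571... slice should be gone and a 1ded7644... slice present
    # once the replacement pod is admitted.
    systemd-cgls --no-pager /kubepods.slice/kubepods-burstable.slice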
Sep 6 00:44:51.406044 kubelet[2415]: I0906 00:44:51.405882 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-bpf-maps\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406044 kubelet[2415]: I0906 00:44:51.405936 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-cilium-cgroup\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406044 kubelet[2415]: I0906 00:44:51.405965 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-cni-path\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406044 kubelet[2415]: I0906 00:44:51.405990 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7tvk\" (UniqueName: \"kubernetes.io/projected/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-kube-api-access-s7tvk\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406044 kubelet[2415]: I0906 00:44:51.406018 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-hostproc\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406044 kubelet[2415]: I0906 00:44:51.406040 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-hubble-tls\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406063 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-etc-cni-netd\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406083 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-cilium-run\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406104 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-host-proc-sys-kernel\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406126 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-host-proc-sys-net\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406156 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-cilium-config-path\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406179 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-clustermesh-secrets\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406203 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-lib-modules\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406229 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-cilium-ipsec-secrets\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.406832 kubelet[2415]: I0906 00:44:51.406255 2415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ded7644-2258-4e96-a8b4-4bb8ae8f5950-xtables-lock\") pod \"cilium-qdqnv\" (UID: \"1ded7644-2258-4e96-a8b4-4bb8ae8f5950\") " pod="kube-system/cilium-qdqnv" Sep 6 00:44:51.578649 env[1434]: time="2025-09-06T00:44:51.578587620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdqnv,Uid:1ded7644-2258-4e96-a8b4-4bb8ae8f5950,Namespace:kube-system,Attempt:0,}" Sep 6 00:44:51.624877 env[1434]: time="2025-09-06T00:44:51.624772108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:44:51.625185 env[1434]: time="2025-09-06T00:44:51.625128909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:44:51.625347 env[1434]: time="2025-09-06T00:44:51.625179710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:44:51.625744 env[1434]: time="2025-09-06T00:44:51.625679212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5 pid=4339 runtime=io.containerd.runc.v2 Sep 6 00:44:51.643591 systemd[1]: Started cri-containerd-1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5.scope. 
Sep 6 00:44:51.656577 kubelet[2415]: I0906 00:44:51.656442 2415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cbb2571-6e30-43fc-a763-face032e2d3c" path="/var/lib/kubelet/pods/5cbb2571-6e30-43fc-a763-face032e2d3c/volumes"
Sep 6 00:44:51.677652 env[1434]: time="2025-09-06T00:44:51.677600223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdqnv,Uid:1ded7644-2258-4e96-a8b4-4bb8ae8f5950,Namespace:kube-system,Attempt:0,} returns sandbox id \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\""
Sep 6 00:44:51.682565 env[1434]: time="2025-09-06T00:44:51.682520043Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:44:51.722955 env[1434]: time="2025-09-06T00:44:51.722884207Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d\""
Sep 6 00:44:51.724020 env[1434]: time="2025-09-06T00:44:51.723629510Z" level=info msg="StartContainer for \"e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d\""
Sep 6 00:44:51.746628 systemd[1]: Started cri-containerd-e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d.scope.
Sep 6 00:44:51.786854 env[1434]: time="2025-09-06T00:44:51.784799060Z" level=info msg="StartContainer for \"e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d\" returns successfully"
Sep 6 00:44:51.797404 systemd[1]: cri-containerd-e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d.scope: Deactivated successfully.
Sep 6 00:44:51.858351 env[1434]: time="2025-09-06T00:44:51.858255659Z" level=info msg="shim disconnected" id=e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d
Sep 6 00:44:51.858351 env[1434]: time="2025-09-06T00:44:51.858337559Z" level=warning msg="cleaning up after shim disconnected" id=e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d namespace=k8s.io
Sep 6 00:44:51.858351 env[1434]: time="2025-09-06T00:44:51.858352159Z" level=info msg="cleaning up dead shim"
Sep 6 00:44:51.869914 env[1434]: time="2025-09-06T00:44:51.869847506Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4427 runtime=io.containerd.runc.v2\n"
Sep 6 00:44:52.008878 kubelet[2415]: W0906 00:44:52.008812 2415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cbb2571_6e30_43fc_a763_face032e2d3c.slice/cri-containerd-dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16.scope WatchSource:0}: container "dfa3aa9fac0814a49fad84e4fd8501461e1bde3eaa4436645e67745a02935b16" in namespace "k8s.io": not found
Sep 6 00:44:52.214347 env[1434]: time="2025-09-06T00:44:52.214283327Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:44:52.251059 env[1434]: time="2025-09-06T00:44:52.250989980Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453\""
Sep 6 00:44:52.252092 env[1434]: time="2025-09-06T00:44:52.252046384Z" level=info msg="StartContainer for \"3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453\""
Sep 6 00:44:52.274766 systemd[1]: Started cri-containerd-3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453.scope.
Sep 6 00:44:52.318618 env[1434]: time="2025-09-06T00:44:52.318557861Z" level=info msg="StartContainer for \"3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453\" returns successfully"
Sep 6 00:44:52.325980 systemd[1]: cri-containerd-3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453.scope: Deactivated successfully.
Sep 6 00:44:52.362457 env[1434]: time="2025-09-06T00:44:52.362378143Z" level=info msg="shim disconnected" id=3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453
Sep 6 00:44:52.362457 env[1434]: time="2025-09-06T00:44:52.362446943Z" level=warning msg="cleaning up after shim disconnected" id=3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453 namespace=k8s.io
Sep 6 00:44:52.362457 env[1434]: time="2025-09-06T00:44:52.362460543Z" level=info msg="cleaning up dead shim"
Sep 6 00:44:52.372355 env[1434]: time="2025-09-06T00:44:52.372285284Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4491 runtime=io.containerd.runc.v2\n"
Sep 6 00:44:53.223959 env[1434]: time="2025-09-06T00:44:53.223877646Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:44:53.275664 env[1434]: time="2025-09-06T00:44:53.275586865Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302\""
Sep 6 00:44:53.276439 env[1434]: time="2025-09-06T00:44:53.276389669Z" level=info msg="StartContainer for \"b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302\""
Sep 6 00:44:53.309009 systemd[1]: Started cri-containerd-b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302.scope.
Sep 6 00:44:53.348769 systemd[1]: cri-containerd-b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302.scope: Deactivated successfully.
Sep 6 00:44:53.354079 env[1434]: time="2025-09-06T00:44:53.354025098Z" level=info msg="StartContainer for \"b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302\" returns successfully"
Sep 6 00:44:53.392421 env[1434]: time="2025-09-06T00:44:53.392334061Z" level=info msg="shim disconnected" id=b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302
Sep 6 00:44:53.392421 env[1434]: time="2025-09-06T00:44:53.392409461Z" level=warning msg="cleaning up after shim disconnected" id=b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302 namespace=k8s.io
Sep 6 00:44:53.392421 env[1434]: time="2025-09-06T00:44:53.392424361Z" level=info msg="cleaning up dead shim"
Sep 6 00:44:53.402478 env[1434]: time="2025-09-06T00:44:53.402415604Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4552 runtime=io.containerd.runc.v2\n"
Sep 6 00:44:53.519611 systemd[1]: run-containerd-runc-k8s.io-b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302-runc.SkInDo.mount: Deactivated successfully.
Sep 6 00:44:53.519797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302-rootfs.mount: Deactivated successfully.
Sep 6 00:44:54.227562 env[1434]: time="2025-09-06T00:44:54.227503126Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:44:54.262346 env[1434]: time="2025-09-06T00:44:54.262283477Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0\""
Sep 6 00:44:54.263185 env[1434]: time="2025-09-06T00:44:54.263140781Z" level=info msg="StartContainer for \"f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0\""
Sep 6 00:44:54.296363 systemd[1]: Started cri-containerd-f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0.scope.
Sep 6 00:44:54.329300 systemd[1]: cri-containerd-f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0.scope: Deactivated successfully.
Sep 6 00:44:54.335862 env[1434]: time="2025-09-06T00:44:54.335753295Z" level=info msg="StartContainer for \"f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0\" returns successfully"
Sep 6 00:44:54.370393 env[1434]: time="2025-09-06T00:44:54.370329845Z" level=info msg="shim disconnected" id=f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0
Sep 6 00:44:54.370393 env[1434]: time="2025-09-06T00:44:54.370388045Z" level=warning msg="cleaning up after shim disconnected" id=f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0 namespace=k8s.io
Sep 6 00:44:54.370393 env[1434]: time="2025-09-06T00:44:54.370400645Z" level=info msg="cleaning up dead shim"
Sep 6 00:44:54.380721 env[1434]: time="2025-09-06T00:44:54.380655390Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:44:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4608 runtime=io.containerd.runc.v2\n"
Sep 6 00:44:54.519940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0-rootfs.mount: Deactivated successfully.
Sep 6 00:44:55.124198 kubelet[2415]: W0906 00:44:55.124128 2415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ded7644_2258_4e96_a8b4_4bb8ae8f5950.slice/cri-containerd-e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d.scope WatchSource:0}: task e19223b3a9f865454557d8c470c2d5035844b021259c944281af43bb2bfe908d not found: not found
Sep 6 00:44:55.235309 env[1434]: time="2025-09-06T00:44:55.235074410Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:44:55.269755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2001271496.mount: Deactivated successfully.
Sep 6 00:44:55.279084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116321080.mount: Deactivated successfully.
Sep 6 00:44:55.287918 env[1434]: time="2025-09-06T00:44:55.287868543Z" level=info msg="CreateContainer within sandbox \"1000a9fbed50761b5446eb377f7df4f5865ce2e0ab25a52ff9d9d92caaab07e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"929658aea8cbf343ad1b95a22b233ec38cf4422de3fb42452306ac41b1ac023d\""
Sep 6 00:44:55.290293 env[1434]: time="2025-09-06T00:44:55.288752047Z" level=info msg="StartContainer for \"929658aea8cbf343ad1b95a22b233ec38cf4422de3fb42452306ac41b1ac023d\""
Sep 6 00:44:55.309050 systemd[1]: Started cri-containerd-929658aea8cbf343ad1b95a22b233ec38cf4422de3fb42452306ac41b1ac023d.scope.
Sep 6 00:44:55.354066 env[1434]: time="2025-09-06T00:44:55.354000835Z" level=info msg="StartContainer for \"929658aea8cbf343ad1b95a22b233ec38cf4422de3fb42452306ac41b1ac023d\" returns successfully"
Sep 6 00:44:55.853853 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:44:56.981430 systemd[1]: run-containerd-runc-k8s.io-929658aea8cbf343ad1b95a22b233ec38cf4422de3fb42452306ac41b1ac023d-runc.tnlJVW.mount: Deactivated successfully.
Sep 6 00:44:58.234038 kubelet[2415]: W0906 00:44:58.233975 2415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ded7644_2258_4e96_a8b4_4bb8ae8f5950.slice/cri-containerd-3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453.scope WatchSource:0}: task 3b51910e067595ce353daf1daa6dd5640910368d73168e9a9ee803820c443453 not found: not found
Sep 6 00:44:58.577921 systemd-networkd[1585]: lxc_health: Link UP
Sep 6 00:44:58.587540 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:44:58.586940 systemd-networkd[1585]: lxc_health: Gained carrier
Sep 6 00:44:59.630175 kubelet[2415]: I0906 00:44:59.630087 2415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qdqnv" podStartSLOduration=8.630058702 podStartE2EDuration="8.630058702s" podCreationTimestamp="2025-09-06 00:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:44:56.25867305 +0000 UTC m=+200.739309239" watchObservedRunningTime="2025-09-06 00:44:59.630058702 +0000 UTC m=+204.110694891"
Sep 6 00:45:00.326190 systemd-networkd[1585]: lxc_health: Gained IPv6LL
Sep 6 00:45:01.354863 kubelet[2415]: W0906 00:45:01.354531 2415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ded7644_2258_4e96_a8b4_4bb8ae8f5950.slice/cri-containerd-b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302.scope WatchSource:0}: task b722b73873c01c19118838a5e843e4792dfe6c285b5b4f33e900c443ceb74302 not found: not found
Sep 6 00:45:03.602229 systemd[1]: run-containerd-runc-k8s.io-929658aea8cbf343ad1b95a22b233ec38cf4422de3fb42452306ac41b1ac023d-runc.N956u9.mount: Deactivated successfully.
Sep 6 00:45:04.475118 kubelet[2415]: W0906 00:45:04.475046 2415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ded7644_2258_4e96_a8b4_4bb8ae8f5950.slice/cri-containerd-f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0.scope WatchSource:0}: task f51e8e8422975b409e6d5ceef1cc1d6045c47c1a243b6a6216653bdc6d04a6f0 not found: not found
Sep 6 00:45:05.964123 sshd[4284]: pam_unix(sshd:session): session closed for user core
Sep 6 00:45:05.967671 systemd[1]: sshd@23-10.200.8.17:22-10.200.16.10:37844.service: Deactivated successfully.
Sep 6 00:45:05.968803 systemd[1]: session-26.scope: Deactivated successfully.
Sep 6 00:45:05.969574 systemd-logind[1421]: Session 26 logged out. Waiting for processes to exit.
Sep 6 00:45:05.970582 systemd-logind[1421]: Removed session 26.