Nov 1 00:39:08.029736 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:39:08.029768 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:39:08.029783 kernel: BIOS-provided physical RAM map:
Nov 1 00:39:08.029794 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:39:08.029805 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 1 00:39:08.029816 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 1 00:39:08.029845 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Nov 1 00:39:08.029858 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 1 00:39:08.029869 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 1 00:39:08.029879 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 1 00:39:08.029891 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 1 00:39:08.029901 kernel: printk: bootconsole [earlyser0] enabled
Nov 1 00:39:08.029912 kernel: NX (Execute Disable) protection: active
Nov 1 00:39:08.029924 kernel: efi: EFI v2.70 by Microsoft
Nov 1 00:39:08.029941 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Nov 1 00:39:08.029954 kernel: random: crng init done
Nov 1 00:39:08.029966 kernel: SMBIOS 3.1.0 present.
Nov 1 00:39:08.029979 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 1 00:39:08.029991 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 1 00:39:08.030002 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 1 00:39:08.030014 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
Nov 1 00:39:08.030026 kernel: Hyper-V: Nested features: 0x1e0101
Nov 1 00:39:08.030040 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 1 00:39:08.030052 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 1 00:39:08.030064 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 1 00:39:08.030075 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 1 00:39:08.030088 kernel: tsc: Detected 2593.905 MHz processor
Nov 1 00:39:08.030100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:39:08.030113 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:39:08.030124 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 1 00:39:08.030136 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:39:08.030148 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 1 00:39:08.030162 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 1 00:39:08.030174 kernel: Using GB pages for direct mapping
Nov 1 00:39:08.030185 kernel: Secure boot disabled
Nov 1 00:39:08.030197 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:39:08.030208 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 1 00:39:08.030220 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030233 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030245 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 1 00:39:08.030265 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 1 00:39:08.030278 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030290 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030303 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030316 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030329 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030345 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030358 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:39:08.030370 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 1 00:39:08.030383 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 1 00:39:08.030396 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 1 00:39:08.030409 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 1 00:39:08.030422 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 1 00:39:08.030434 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 1 00:39:08.030449 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 1 00:39:08.030461 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 1 00:39:08.030474 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 1 00:39:08.030487 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 1 00:39:08.030499 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:39:08.030512 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:39:08.030524 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 1 00:39:08.030537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 1 00:39:08.030550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 1 00:39:08.030565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 1 00:39:08.030578 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 1 00:39:08.030590 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 1 00:39:08.030603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 1 00:39:08.030616 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 1 00:39:08.030628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 1 00:39:08.030641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 1 00:39:08.030654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 1 00:39:08.030667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 1 00:39:08.030681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 1 00:39:08.030694 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 1 00:39:08.030707 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 1 00:39:08.030719 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 1 00:39:08.030732 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 1 00:39:08.030745 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 1 00:39:08.030757 kernel: Zone ranges:
Nov 1 00:39:08.030770 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:39:08.030783 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:39:08.030798 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 1 00:39:08.030811 kernel: Movable zone start for each node
Nov 1 00:39:08.030833 kernel: Early memory node ranges
Nov 1 00:39:08.030847 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:39:08.030859 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 1 00:39:08.030872 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 1 00:39:08.030884 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 1 00:39:08.030897 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 1 00:39:08.030910 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:39:08.030925 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:39:08.030937 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 1 00:39:08.030950 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 1 00:39:08.030962 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 1 00:39:08.030975 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:39:08.030988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:39:08.031000 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:39:08.031013 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 1 00:39:08.031025 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:39:08.031040 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 1 00:39:08.031052 kernel: Booting paravirtualized kernel on Hyper-V
Nov 1 00:39:08.031065 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:39:08.031079 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:39:08.031091 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Nov 1 00:39:08.031104 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Nov 1 00:39:08.031116 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:39:08.031128 kernel: Hyper-V: PV spinlocks enabled
Nov 1 00:39:08.031141 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:39:08.031157 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 1 00:39:08.031169 kernel: Policy zone: Normal
Nov 1 00:39:08.031184 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:39:08.031198 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:39:08.031211 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 1 00:39:08.031224 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:39:08.031237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:39:08.031250 kernel: Memory: 8079144K/8387460K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 308056K reserved, 0K cma-reserved)
Nov 1 00:39:08.031265 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:39:08.031278 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:39:08.031300 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:39:08.031316 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:39:08.031329 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:39:08.031343 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:39:08.031356 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:39:08.031370 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:39:08.031383 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:39:08.031397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:39:08.031410 kernel: Using NULL legacy PIC
Nov 1 00:39:08.031426 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 1 00:39:08.031439 kernel: Console: colour dummy device 80x25
Nov 1 00:39:08.031453 kernel: printk: console [tty1] enabled
Nov 1 00:39:08.031466 kernel: printk: console [ttyS0] enabled
Nov 1 00:39:08.031479 kernel: printk: bootconsole [earlyser0] disabled
Nov 1 00:39:08.031495 kernel: ACPI: Core revision 20210730
Nov 1 00:39:08.031508 kernel: Failed to register legacy timer interrupt
Nov 1 00:39:08.031522 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:39:08.031535 kernel: Hyper-V: Using IPI hypercalls
Nov 1 00:39:08.031549 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Nov 1 00:39:08.031562 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 00:39:08.031576 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 00:39:08.031590 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:39:08.031603 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:39:08.031616 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:39:08.031632 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 1 00:39:08.031646 kernel: RETBleed: Vulnerable
Nov 1 00:39:08.031659 kernel: Speculative Store Bypass: Vulnerable
Nov 1 00:39:08.031672 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:39:08.031685 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:39:08.031698 kernel: active return thunk: its_return_thunk
Nov 1 00:39:08.031711 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:39:08.031725 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:39:08.031738 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:39:08.031752 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:39:08.031767 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 1 00:39:08.031780 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 1 00:39:08.031794 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 1 00:39:08.031807 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:39:08.031820 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 1 00:39:08.031839 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 1 00:39:08.034008 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 1 00:39:08.034027 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 1 00:39:08.034041 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:39:08.034054 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:39:08.034068 kernel: LSM: Security Framework initializing
Nov 1 00:39:08.034082 kernel: SELinux: Initializing.
Nov 1 00:39:08.034100 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:39:08.034114 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:39:08.034128 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 1 00:39:08.034142 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 1 00:39:08.034155 kernel: signal: max sigframe size: 3632
Nov 1 00:39:08.034168 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:39:08.034182 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:39:08.034196 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:39:08.034210 kernel: x86: Booting SMP configuration:
Nov 1 00:39:08.034223 kernel: .... node #0, CPUs: #1
Nov 1 00:39:08.034240 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 1 00:39:08.034255 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:39:08.034269 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:39:08.034283 kernel: smpboot: Max logical packages: 1
Nov 1 00:39:08.034296 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Nov 1 00:39:08.034310 kernel: devtmpfs: initialized
Nov 1 00:39:08.034324 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:39:08.034338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 1 00:39:08.034355 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:39:08.034368 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:39:08.034381 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:39:08.034395 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:39:08.034408 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:39:08.034422 kernel: audit: type=2000 audit(1761957547.024:1): state=initialized audit_enabled=0 res=1
Nov 1 00:39:08.034436 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:39:08.034450 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:39:08.034463 kernel: cpuidle: using governor menu
Nov 1 00:39:08.034480 kernel: ACPI: bus type PCI registered
Nov 1 00:39:08.034493 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:39:08.034507 kernel: dca service started, version 1.12.1
Nov 1 00:39:08.034521 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:39:08.034535 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:39:08.034548 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:39:08.034562 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:39:08.034576 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:39:08.034589 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:39:08.034605 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:39:08.034618 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:39:08.034632 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:39:08.034646 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:39:08.034660 kernel: ACPI: Interpreter enabled
Nov 1 00:39:08.034673 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:39:08.034686 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:39:08.034700 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:39:08.034714 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 1 00:39:08.034730 kernel: iommu: Default domain type: Translated
Nov 1 00:39:08.034744 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:39:08.034758 kernel: vgaarb: loaded
Nov 1 00:39:08.034771 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:39:08.034785 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:39:08.034800 kernel: PTP clock support registered
Nov 1 00:39:08.034813 kernel: Registered efivars operations
Nov 1 00:39:08.034836 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:39:08.034848 kernel: PCI: System does not support PCI
Nov 1 00:39:08.034862 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 1 00:39:08.034873 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:39:08.034885 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:39:08.034898 kernel: pnp: PnP ACPI init
Nov 1 00:39:08.034912 kernel: pnp: PnP ACPI: found 3 devices
Nov 1 00:39:08.034926 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:39:08.034940 kernel: NET: Registered PF_INET protocol family
Nov 1 00:39:08.034953 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:39:08.034967 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 1 00:39:08.034984 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:39:08.034997 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:39:08.035011 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 1 00:39:08.035025 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 1 00:39:08.035038 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:39:08.035052 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:39:08.035065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:39:08.035079 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:39:08.035093 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:39:08.035108 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:39:08.035122 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Nov 1 00:39:08.035136 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:39:08.035150 kernel: Initialise system trusted keyrings
Nov 1 00:39:08.035164 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 1 00:39:08.035178 kernel: Key type asymmetric registered
Nov 1 00:39:08.035191 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:39:08.035204 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:39:08.035218 kernel: io scheduler mq-deadline registered
Nov 1 00:39:08.035234 kernel: io scheduler kyber registered
Nov 1 00:39:08.035248 kernel: io scheduler bfq registered
Nov 1 00:39:08.035261 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:39:08.035275 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:39:08.035289 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:39:08.035303 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:39:08.035317 kernel: i8042: PNP: No PS/2 controller found.
Nov 1 00:39:08.040841 kernel: rtc_cmos 00:02: registered as rtc0
Nov 1 00:39:08.040981 kernel: rtc_cmos 00:02: setting system clock to 2025-11-01T00:39:07 UTC (1761957547)
Nov 1 00:39:08.041094 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 1 00:39:08.041112 kernel: intel_pstate: CPU model not supported
Nov 1 00:39:08.041126 kernel: efifb: probing for efifb
Nov 1 00:39:08.041140 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 1 00:39:08.041154 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 1 00:39:08.041168 kernel: efifb: scrolling: redraw
Nov 1 00:39:08.041181 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 1 00:39:08.041196 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:39:08.041213 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:39:08.041227 kernel: pstore: Registered efi as persistent store backend
Nov 1 00:39:08.041241 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:39:08.041254 kernel: Segment Routing with IPv6
Nov 1 00:39:08.041268 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:39:08.041281 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:39:08.041295 kernel: Key type dns_resolver registered
Nov 1 00:39:08.041309 kernel: IPI shorthand broadcast: enabled
Nov 1 00:39:08.041322 kernel: sched_clock: Marking stable (798874800, 23594200)->(1028433800, -205964800)
Nov 1 00:39:08.041338 kernel: registered taskstats version 1
Nov 1 00:39:08.041352 kernel: Loading compiled-in X.509 certificates
Nov 1 00:39:08.041366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:39:08.041379 kernel: Key type .fscrypt registered
Nov 1 00:39:08.041392 kernel: Key type fscrypt-provisioning registered
Nov 1 00:39:08.041406 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:39:08.041419 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:39:08.041433 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:39:08.041449 kernel: ima: No architecture policies found
Nov 1 00:39:08.041462 kernel: clk: Disabling unused clocks
Nov 1 00:39:08.041476 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:39:08.041489 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:39:08.041503 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:39:08.041516 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:39:08.041530 kernel: Run /init as init process
Nov 1 00:39:08.041543 kernel: with arguments:
Nov 1 00:39:08.041557 kernel: /init
Nov 1 00:39:08.041570 kernel: with environment:
Nov 1 00:39:08.041585 kernel: HOME=/
Nov 1 00:39:08.041598 kernel: TERM=linux
Nov 1 00:39:08.041611 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:39:08.041628 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:39:08.041645 systemd[1]: Detected virtualization microsoft.
Nov 1 00:39:08.041660 systemd[1]: Detected architecture x86-64.
Nov 1 00:39:08.041673 systemd[1]: Running in initrd.
Nov 1 00:39:08.041690 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:39:08.041704 systemd[1]: Hostname set to .
Nov 1 00:39:08.041719 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:39:08.041733 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:39:08.041747 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:39:08.041761 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:39:08.041775 systemd[1]: Reached target paths.target.
Nov 1 00:39:08.041788 systemd[1]: Reached target slices.target.
Nov 1 00:39:08.041802 systemd[1]: Reached target swap.target.
Nov 1 00:39:08.041818 systemd[1]: Reached target timers.target.
Nov 1 00:39:08.041845 systemd[1]: Listening on iscsid.socket.
Nov 1 00:39:08.041859 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:39:08.041873 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:39:08.041887 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:39:08.041902 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:39:08.041916 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:39:08.041933 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:39:08.041947 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:39:08.041961 systemd[1]: Reached target sockets.target.
Nov 1 00:39:08.041975 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:39:08.041989 systemd[1]: Finished network-cleanup.service.
Nov 1 00:39:08.042003 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:39:08.042017 systemd[1]: Starting systemd-journald.service...
Nov 1 00:39:08.042032 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:39:08.042046 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:39:08.042063 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:39:08.042082 systemd-journald[183]: Journal started
Nov 1 00:39:08.042148 systemd-journald[183]: Runtime Journal (/run/log/journal/4dee01333cb04c36b575f2db23f1d24e) is 8.0M, max 159.0M, 151.0M free.
Nov 1 00:39:08.040835 systemd-modules-load[184]: Inserted module 'overlay'
Nov 1 00:39:08.058842 systemd[1]: Started systemd-journald.service.
Nov 1 00:39:08.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.076835 kernel: audit: type=1130 audit(1761957548.064:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.077222 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:39:08.082330 systemd-resolved[185]: Positive Trust Anchors:
Nov 1 00:39:08.082532 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:39:08.099906 kernel: audit: type=1130 audit(1761957548.082:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.082582 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:39:08.085698 systemd-resolved[185]: Defaulting to hostname 'linux'.
Nov 1 00:39:08.099257 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:39:08.121368 systemd[1]: Started systemd-resolved.service.
Nov 1 00:39:08.169959 kernel: audit: type=1130 audit(1761957548.120:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.169995 kernel: audit: type=1130 audit(1761957548.134:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.170013 kernel: audit: type=1130 audit(1761957548.147:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.134942 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:39:08.147746 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:39:08.161679 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:39:08.164504 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:39:08.173716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:39:08.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.196842 kernel: audit: type=1130 audit(1761957548.180:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.207157 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:39:08.227549 kernel: audit: type=1130 audit(1761957548.208:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:08.210476 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:39:08.238358 dracut-cmdline[200]: dracut-dracut-053
Nov 1 00:39:08.242852 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:39:08.258772 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:39:08.268725 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 1 00:39:08.271492 kernel: Bridge firewalling registered
Nov 1 00:39:08.298845 kernel: SCSI subsystem initialized
Nov 1 00:39:08.323352 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:39:08.323419 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:39:08.332174 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:39:08.332212 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:39:08.336193 systemd-modules-load[184]: Inserted module 'dm_multipath' Nov 1 00:39:08.337148 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:39:08.345129 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:39:08.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:08.364841 kernel: audit: type=1130 audit(1761957548.343:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:08.380289 kernel: iscsi: registered transport (tcp) Nov 1 00:39:08.380326 kernel: audit: type=1130 audit(1761957548.379:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:08.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:08.375633 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:39:08.419565 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:39:08.419611 kernel: QLogic iSCSI HBA Driver Nov 1 00:39:08.448962 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:39:08.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:08.452320 systemd[1]: Starting dracut-pre-udev.service... 
Nov 1 00:39:08.504842 kernel: raid6: avx512x4 gen() 18376 MB/s Nov 1 00:39:08.524839 kernel: raid6: avx512x4 xor() 8226 MB/s Nov 1 00:39:08.544837 kernel: raid6: avx512x2 gen() 18418 MB/s Nov 1 00:39:08.565838 kernel: raid6: avx512x2 xor() 29563 MB/s Nov 1 00:39:08.585833 kernel: raid6: avx512x1 gen() 18168 MB/s Nov 1 00:39:08.605834 kernel: raid6: avx512x1 xor() 26853 MB/s Nov 1 00:39:08.626836 kernel: raid6: avx2x4 gen() 18068 MB/s Nov 1 00:39:08.646835 kernel: raid6: avx2x4 xor() 7301 MB/s Nov 1 00:39:08.666832 kernel: raid6: avx2x2 gen() 17982 MB/s Nov 1 00:39:08.687838 kernel: raid6: avx2x2 xor() 22194 MB/s Nov 1 00:39:08.707832 kernel: raid6: avx2x1 gen() 14052 MB/s Nov 1 00:39:08.727834 kernel: raid6: avx2x1 xor() 19472 MB/s Nov 1 00:39:08.748842 kernel: raid6: sse2x4 gen() 11800 MB/s Nov 1 00:39:08.768836 kernel: raid6: sse2x4 xor() 7352 MB/s Nov 1 00:39:08.788836 kernel: raid6: sse2x2 gen() 12870 MB/s Nov 1 00:39:08.809836 kernel: raid6: sse2x2 xor() 7541 MB/s Nov 1 00:39:08.829833 kernel: raid6: sse2x1 gen() 11632 MB/s Nov 1 00:39:08.853680 kernel: raid6: sse2x1 xor() 5945 MB/s Nov 1 00:39:08.853712 kernel: raid6: using algorithm avx512x2 gen() 18418 MB/s Nov 1 00:39:08.853724 kernel: raid6: .... xor() 29563 MB/s, rmw enabled Nov 1 00:39:08.861533 kernel: raid6: using avx512x2 recovery algorithm Nov 1 00:39:08.877848 kernel: xor: automatically using best checksumming function avx Nov 1 00:39:08.974850 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:39:08.983055 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:39:08.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:08.987000 audit: BPF prog-id=7 op=LOAD Nov 1 00:39:08.987000 audit: BPF prog-id=8 op=LOAD Nov 1 00:39:08.988655 systemd[1]: Starting systemd-udevd.service... 
Nov 1 00:39:09.004055 systemd-udevd[384]: Using default interface naming scheme 'v252'. Nov 1 00:39:09.008853 systemd[1]: Started systemd-udevd.service. Nov 1 00:39:09.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:09.019242 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:39:09.034389 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Nov 1 00:39:09.065728 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:39:09.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:09.069094 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:39:09.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:09.107129 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:39:09.156848 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:39:09.169845 kernel: hv_vmbus: Vmbus version:5.2 Nov 1 00:39:09.185841 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:39:09.195856 kernel: AES CTR mode by8 optimization enabled Nov 1 00:39:09.201844 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 1 00:39:09.219851 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 1 00:39:09.237231 kernel: hv_vmbus: registering driver hv_netvsc Nov 1 00:39:09.237271 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:39:09.243713 kernel: hv_vmbus: registering driver hv_storvsc Nov 1 00:39:09.251701 kernel: scsi host1: storvsc_host_t Nov 1 00:39:09.251946 kernel: scsi host0: storvsc_host_t Nov 1 00:39:09.258849 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 1 00:39:09.258900 kernel: hv_vmbus: registering driver hid_hyperv Nov 1 00:39:09.268844 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 1 00:39:09.268899 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 1 00:39:09.284864 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 1 00:39:09.307134 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 1 00:39:09.311955 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:39:09.311977 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 1 00:39:09.336754 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 1 00:39:09.336938 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:39:09.337109 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:39:09.337292 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 1 00:39:09.337455 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 1 00:39:09.337620 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:39:09.337641 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:39:09.404856 kernel: hv_netvsc 
7c1e5204-780c-7c1e-5204-780c7c1e5204 eth0: VF slot 1 added Nov 1 00:39:09.414250 kernel: hv_vmbus: registering driver hv_pci Nov 1 00:39:09.421511 kernel: hv_pci 756bc9d4-23db-44e5-b170-67a0b0a27926: PCI VMBus probing: Using version 0x10004 Nov 1 00:39:09.479687 kernel: hv_pci 756bc9d4-23db-44e5-b170-67a0b0a27926: PCI host bridge to bus 23db:00 Nov 1 00:39:09.479872 kernel: pci_bus 23db:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 1 00:39:09.480051 kernel: pci_bus 23db:00: No busn resource found for root bus, will use [bus 00-ff] Nov 1 00:39:09.480203 kernel: pci 23db:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 1 00:39:09.480378 kernel: pci 23db:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:39:09.480540 kernel: pci 23db:00:02.0: enabling Extended Tags Nov 1 00:39:09.480705 kernel: pci 23db:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 23db:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 1 00:39:09.480879 kernel: pci_bus 23db:00: busn_res: [bus 00-ff] end is updated to 00 Nov 1 00:39:09.481028 kernel: pci 23db:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:39:09.573368 kernel: mlx5_core 23db:00:02.0: enabling device (0000 -> 0002) Nov 1 00:39:09.834541 kernel: mlx5_core 23db:00:02.0: firmware version: 14.30.5006 Nov 1 00:39:09.834725 kernel: mlx5_core 23db:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 00:39:09.834905 kernel: mlx5_core 23db:00:02.0: Supported tc offload range - chains: 1, prios: 1 Nov 1 00:39:09.835049 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (446) Nov 1 00:39:09.835061 kernel: mlx5_core 23db:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Nov 1 00:39:09.835161 kernel: hv_netvsc 7c1e5204-780c-7c1e-5204-780c7c1e5204 eth0: VF registering: eth1 Nov 1 00:39:09.835254 kernel: mlx5_core 23db:00:02.0 eth1: joined to eth0 Nov 1 00:39:09.767817 
systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:39:09.831457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:39:09.847847 kernel: mlx5_core 23db:00:02.0 enP9179s1: renamed from eth1 Nov 1 00:39:09.956964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:39:09.969286 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:39:09.976266 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:39:09.982441 systemd[1]: Starting disk-uuid.service... Nov 1 00:39:11.004850 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:39:11.005192 disk-uuid[564]: The operation has completed successfully. Nov 1 00:39:11.086540 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:39:11.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.086646 systemd[1]: Finished disk-uuid.service. Nov 1 00:39:11.094719 systemd[1]: Starting verity-setup.service... Nov 1 00:39:11.135841 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:39:11.430849 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:39:11.437190 systemd[1]: Finished verity-setup.service. Nov 1 00:39:11.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.442392 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:39:11.518043 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Nov 1 00:39:11.518452 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:39:11.520632 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:39:11.521403 systemd[1]: Starting ignition-setup.service... Nov 1 00:39:11.525644 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:39:11.554056 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:39:11.554097 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:39:11.554110 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:39:11.601059 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:39:11.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.605000 audit: BPF prog-id=9 op=LOAD Nov 1 00:39:11.606363 systemd[1]: Starting systemd-networkd.service... Nov 1 00:39:11.631629 systemd-networkd[831]: lo: Link UP Nov 1 00:39:11.631640 systemd-networkd[831]: lo: Gained carrier Nov 1 00:39:11.635871 systemd-networkd[831]: Enumeration completed Nov 1 00:39:11.636967 systemd[1]: Started systemd-networkd.service. Nov 1 00:39:11.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.641413 systemd-networkd[831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:39:11.646179 systemd[1]: Reached target network.target. Nov 1 00:39:11.654254 systemd[1]: Starting iscsiuio.service... Nov 1 00:39:11.663237 systemd[1]: Started iscsiuio.service. Nov 1 00:39:11.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:11.666078 systemd[1]: Starting iscsid.service... Nov 1 00:39:11.670954 iscsid[839]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:39:11.670954 iscsid[839]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:39:11.670954 iscsid[839]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:39:11.670954 iscsid[839]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:39:11.670954 iscsid[839]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:39:11.670954 iscsid[839]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:39:11.670954 iscsid[839]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:39:11.712956 kernel: mlx5_core 23db:00:02.0 enP9179s1: Link up Nov 1 00:39:11.713191 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 00:39:11.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.672155 systemd[1]: Started iscsid.service. Nov 1 00:39:11.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.677372 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:39:11.713154 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:39:11.717255 systemd[1]: Reached target remote-fs-pre.target. 
Nov 1 00:39:11.721565 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:39:11.726005 systemd[1]: Reached target remote-fs.target. Nov 1 00:39:11.746710 kernel: hv_netvsc 7c1e5204-780c-7c1e-5204-780c7c1e5204 eth0: Data path switched to VF: enP9179s1 Nov 1 00:39:11.736184 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:39:11.757459 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:39:11.753144 systemd-networkd[831]: enP9179s1: Link UP Nov 1 00:39:11.753268 systemd-networkd[831]: eth0: Link UP Nov 1 00:39:11.753452 systemd-networkd[831]: eth0: Gained carrier Nov 1 00:39:11.755158 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:39:11.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.758279 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:39:11.759896 systemd-networkd[831]: enP9179s1: Gained carrier Nov 1 00:39:11.778900 systemd-networkd[831]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 1 00:39:11.876255 systemd[1]: Finished ignition-setup.service. Nov 1 00:39:11.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:11.879624 systemd[1]: Starting ignition-fetch-offline.service... 
Nov 1 00:39:13.612057 systemd-networkd[831]: eth0: Gained IPv6LL Nov 1 00:39:15.477764 ignition[858]: Ignition 2.14.0 Nov 1 00:39:15.477781 ignition[858]: Stage: fetch-offline Nov 1 00:39:15.477898 ignition[858]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:39:15.477952 ignition[858]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:39:15.618363 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:39:15.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.619864 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:39:15.645596 kernel: kauditd_printk_skb: 18 callbacks suppressed Nov 1 00:39:15.645619 kernel: audit: type=1130 audit(1761957555.623:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.618547 ignition[858]: parsed url from cmdline: "" Nov 1 00:39:15.625356 systemd[1]: Starting ignition-fetch.service... 
Nov 1 00:39:15.618551 ignition[858]: no config URL provided Nov 1 00:39:15.618557 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:39:15.618565 ignition[858]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:39:15.618571 ignition[858]: failed to fetch config: resource requires networking Nov 1 00:39:15.618939 ignition[858]: Ignition finished successfully Nov 1 00:39:15.634081 ignition[864]: Ignition 2.14.0 Nov 1 00:39:15.634088 ignition[864]: Stage: fetch Nov 1 00:39:15.634208 ignition[864]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:39:15.634243 ignition[864]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:39:15.666993 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:39:15.670128 ignition[864]: parsed url from cmdline: "" Nov 1 00:39:15.670137 ignition[864]: no config URL provided Nov 1 00:39:15.670146 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:39:15.670160 ignition[864]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:39:15.670203 ignition[864]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 1 00:39:15.778655 ignition[864]: GET result: OK Nov 1 00:39:15.778763 ignition[864]: config has been read from IMDS userdata Nov 1 00:39:15.778800 ignition[864]: parsing config with SHA512: 1ac278ddd6f91aac72fcd5554c8d71dbbb04cb2bd2458bab6fe9bc76168e7d54c9d9b3fb272e740218907c43df09d58486498291c936a32d834568b61023a450 Nov 1 00:39:15.783051 unknown[864]: fetched base config from "system" Nov 1 00:39:15.783609 ignition[864]: fetch: fetch complete Nov 1 00:39:15.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:15.783066 unknown[864]: fetched base config from "system" Nov 1 00:39:15.805373 kernel: audit: type=1130 audit(1761957555.787:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.783615 ignition[864]: fetch: fetch passed Nov 1 00:39:15.783073 unknown[864]: fetched user config from "azure" Nov 1 00:39:15.783654 ignition[864]: Ignition finished successfully Nov 1 00:39:15.785111 systemd[1]: Finished ignition-fetch.service. Nov 1 00:39:15.788915 systemd[1]: Starting ignition-kargs.service... Nov 1 00:39:15.813613 ignition[870]: Ignition 2.14.0 Nov 1 00:39:15.813619 ignition[870]: Stage: kargs Nov 1 00:39:15.813716 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:39:15.813740 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:39:15.825435 systemd[1]: Finished ignition-kargs.service. Nov 1 00:39:15.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.820171 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:39:15.844955 kernel: audit: type=1130 audit(1761957555.827:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.828593 systemd[1]: Starting ignition-disks.service... 
Nov 1 00:39:15.822048 ignition[870]: kargs: kargs passed Nov 1 00:39:15.822103 ignition[870]: Ignition finished successfully Nov 1 00:39:15.848634 ignition[876]: Ignition 2.14.0 Nov 1 00:39:15.848640 ignition[876]: Stage: disks Nov 1 00:39:15.848729 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:39:15.855046 systemd[1]: Finished ignition-disks.service. Nov 1 00:39:15.848748 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:39:15.851675 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:39:15.854347 ignition[876]: disks: disks passed Nov 1 00:39:15.854387 ignition[876]: Ignition finished successfully Nov 1 00:39:15.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.869125 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:39:15.887227 kernel: audit: type=1130 audit(1761957555.868:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.887233 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:39:15.891632 systemd[1]: Reached target local-fs.target. Nov 1 00:39:15.891706 systemd[1]: Reached target sysinit.target. Nov 1 00:39:15.892097 systemd[1]: Reached target basic.target. Nov 1 00:39:15.893258 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:39:15.963024 systemd-fsck[884]: ROOT: clean, 637/7326000 files, 481088/7359488 blocks Nov 1 00:39:15.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:15.967676 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:39:15.988089 kernel: audit: type=1130 audit(1761957555.969:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:15.971122 systemd[1]: Mounting sysroot.mount... Nov 1 00:39:16.000844 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:39:16.001520 systemd[1]: Mounted sysroot.mount. Nov 1 00:39:16.005950 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:39:16.048498 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:39:16.054637 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 00:39:16.059806 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:39:16.059862 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:39:16.067694 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:39:16.125306 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:39:16.132212 systemd[1]: Starting initrd-setup-root.service... 
Nov 1 00:39:16.148843 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (895) Nov 1 00:39:16.154084 initrd-setup-root[900]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:39:16.161703 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:39:16.161725 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:39:16.161739 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:39:16.179847 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:39:16.207005 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:39:16.232359 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:39:16.350886 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:39:16.788572 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:39:16.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:16.795123 systemd[1]: Starting ignition-mount.service... Nov 1 00:39:16.812076 kernel: audit: type=1130 audit(1761957556.794:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:16.810176 systemd[1]: Starting sysroot-boot.service... Nov 1 00:39:16.818293 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 00:39:16.818422 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 00:39:16.839072 systemd[1]: Finished sysroot-boot.service. Nov 1 00:39:16.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:16.857128 kernel: audit: type=1130 audit(1761957556.843:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:17.080498 ignition[964]: INFO : Ignition 2.14.0 Nov 1 00:39:17.080498 ignition[964]: INFO : Stage: mount Nov 1 00:39:17.085078 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:39:17.085078 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:39:17.098726 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:39:17.102242 ignition[964]: INFO : mount: mount passed Nov 1 00:39:17.102242 ignition[964]: INFO : Ignition finished successfully Nov 1 00:39:17.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:17.104620 systemd[1]: Finished ignition-mount.service. Nov 1 00:39:17.124214 kernel: audit: type=1130 audit(1761957557.108:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:17.614891 coreos-metadata[894]: Nov 01 00:39:17.614 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 1 00:39:17.638188 coreos-metadata[894]: Nov 01 00:39:17.638 INFO Fetch successful Nov 1 00:39:17.676603 coreos-metadata[894]: Nov 01 00:39:17.676 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 1 00:39:17.694014 coreos-metadata[894]: Nov 01 00:39:17.693 INFO Fetch successful Nov 1 00:39:17.716873 coreos-metadata[894]: Nov 01 00:39:17.716 INFO wrote hostname ci-3510.3.8-n-bb3ab03ab7 to /sysroot/etc/hostname Nov 1 00:39:17.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:17.719057 systemd[1]: Finished flatcar-metadata-hostname.service. Nov 1 00:39:17.742903 kernel: audit: type=1130 audit(1761957557.723:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:17.725931 systemd[1]: Starting ignition-files.service... Nov 1 00:39:17.747667 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:39:17.766843 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (973) Nov 1 00:39:17.775730 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:39:17.775764 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:39:17.775778 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:39:17.987182 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Nov 1 00:39:18.001211 ignition[992]: INFO : Ignition 2.14.0
Nov 1 00:39:18.001211 ignition[992]: INFO : Stage: files
Nov 1 00:39:18.005705 ignition[992]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:18.005705 ignition[992]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:39:18.005705 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:39:18.026295 ignition[992]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:39:18.029619 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:39:18.029619 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:39:18.078979 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:39:18.083264 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:39:18.098637 unknown[992]: wrote ssh authorized keys file for user: core
Nov 1 00:39:18.101687 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:39:18.122974 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:39:18.128474 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 00:39:18.233751 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:39:18.310126 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:39:18.316221 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:39:18.316221 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 1 00:39:18.505304 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:39:18.550727 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:39:18.555487 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:39:18.560025 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:39:18.564527 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:39:18.569078 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:39:18.573723 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:39:18.578238 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Nov 1 00:39:18.583159 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156864312"
Nov 1 00:39:18.624736 ignition[992]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156864312": device or resource busy
Nov 1 00:39:18.624736 ignition[992]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1156864312", trying btrfs: device or resource busy
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156864312"
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156864312"
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1156864312"
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1156864312"
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Nov 1 00:39:18.624736 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:39:18.682969 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3293230204"
Nov 1 00:39:18.688211 ignition[992]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3293230204": device or resource busy
Nov 1 00:39:18.688211 ignition[992]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3293230204", trying btrfs: device or resource busy
Nov 1 00:39:18.688211 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3293230204"
Nov 1 00:39:18.705618 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3293230204"
Nov 1 00:39:18.705618 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3293230204"
Nov 1 00:39:18.715034 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3293230204"
Nov 1 00:39:18.715034 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Nov 1 00:39:18.715034 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 00:39:18.715034 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 1 00:39:18.741512 systemd[1]: mnt-oem3293230204.mount: Deactivated successfully.
Nov 1 00:39:18.935693 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Nov 1 00:39:19.135988 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 00:39:19.135988 ignition[992]: INFO : files: op(14): [started] processing unit "waagent.service"
Nov 1 00:39:19.135988 ignition[992]: INFO : files: op(14): [finished] processing unit "waagent.service"
Nov 1 00:39:19.168900 kernel: audit: type=1130 audit(1761957559.144:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(15): [started] processing unit "nvidia.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:39:19.169018 ignition[992]: INFO : files: files passed
Nov 1 00:39:19.169018 ignition[992]: INFO : Ignition finished successfully
Nov 1 00:39:19.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.142221 systemd[1]: Finished ignition-files.service.
Nov 1 00:39:19.145972 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:39:19.242680 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:39:19.163203 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:39:19.163972 systemd[1]: Starting ignition-quench.service...
Nov 1 00:39:19.168750 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:39:19.170058 systemd[1]: Finished ignition-quench.service.
Nov 1 00:39:19.215183 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:39:19.235268 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:39:19.263261 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:39:19.277037 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:39:19.277149 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:39:19.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.283558 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:39:19.287561 systemd[1]: Reached target initrd.target.
Nov 1 00:39:19.289527 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:39:19.290366 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:39:19.306644 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:39:19.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.311875 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:39:19.323199 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:39:19.327483 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:39:19.332243 systemd[1]: Stopped target timers.target.
Nov 1 00:39:19.336248 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:39:19.338744 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:39:19.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.343257 systemd[1]: Stopped target initrd.target.
Nov 1 00:39:19.347242 systemd[1]: Stopped target basic.target.
Nov 1 00:39:19.351081 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:39:19.355777 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:39:19.360467 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:39:19.365255 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:39:19.369341 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:39:19.373991 systemd[1]: Stopped target sysinit.target.
Nov 1 00:39:19.378035 systemd[1]: Stopped target local-fs.target.
Nov 1 00:39:19.382097 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:39:19.387338 systemd[1]: Stopped target swap.target.
Nov 1 00:39:19.391586 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:39:19.394159 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:39:19.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.398794 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:39:19.403143 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:39:19.405801 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:39:19.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.410369 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:39:19.413405 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:39:19.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.418892 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:39:19.421392 systemd[1]: Stopped ignition-files.service.
Nov 1 00:39:19.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.425990 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 00:39:19.428836 systemd[1]: Stopped flatcar-metadata-hostname.service.
Nov 1 00:39:19.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.434583 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:39:19.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.436941 systemd[1]: Stopping iscsiuio.service...
Nov 1 00:39:19.457013 ignition[1030]: INFO : Ignition 2.14.0
Nov 1 00:39:19.457013 ignition[1030]: INFO : Stage: umount
Nov 1 00:39:19.457013 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:19.457013 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:39:19.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.438891 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:39:19.482118 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:39:19.482118 ignition[1030]: INFO : umount: umount passed
Nov 1 00:39:19.439055 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:39:19.492524 ignition[1030]: INFO : Ignition finished successfully
Nov 1 00:39:19.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.443133 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:39:19.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.445310 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:39:19.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.445471 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:39:19.448129 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:39:19.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.448283 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:39:19.456777 systemd[1]: iscsiuio.service: Deactivated successfully.
Nov 1 00:39:19.456910 systemd[1]: Stopped iscsiuio.service.
Nov 1 00:39:19.460669 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:39:19.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.460766 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:39:19.489186 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:39:19.489623 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:39:19.489699 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:39:19.494028 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:39:19.494073 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:39:19.498187 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:39:19.498237 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:39:19.500309 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:39:19.500347 systemd[1]: Stopped ignition-fetch.service.
Nov 1 00:39:19.504369 systemd[1]: Stopped target network.target.
Nov 1 00:39:19.506506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:39:19.506560 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:39:19.508929 systemd[1]: Stopped target paths.target.
Nov 1 00:39:19.513531 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:39:19.515863 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:39:19.518392 systemd[1]: Stopped target slices.target.
Nov 1 00:39:19.520369 systemd[1]: Stopped target sockets.target.
Nov 1 00:39:19.522503 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:39:19.522545 systemd[1]: Closed iscsid.socket.
Nov 1 00:39:19.524216 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:39:19.524250 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:39:19.528497 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:39:19.528539 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:39:19.530738 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:39:19.534981 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:39:19.537879 systemd-networkd[831]: eth0: DHCPv6 lease lost
Nov 1 00:39:19.545897 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:39:19.545979 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:39:19.555326 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:39:19.570506 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:39:19.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.617790 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:39:19.620000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 00:39:19.620000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:39:19.617847 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:39:19.623197 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:39:19.626374 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:39:19.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.626459 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:39:19.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.630943 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:39:19.630996 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:39:19.635977 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:39:19.636027 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:39:19.640557 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:39:19.654613 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:39:19.658113 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:39:19.660691 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:39:19.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.665755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:39:19.665879 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:39:19.673320 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:39:19.673374 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:39:19.680253 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:39:19.680309 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:39:19.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.686893 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:39:19.686946 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:39:19.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.694617 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:39:19.694670 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:39:19.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.702195 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:39:19.719106 kernel: hv_netvsc 7c1e5204-780c-7c1e-5204-780c7c1e5204 eth0: Data path switched from VF: enP9179s1
Nov 1 00:39:19.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.704609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:39:19.704672 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:39:19.716084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:39:19.716176 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:39:19.732541 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:39:19.735052 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:39:19.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.887364 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:39:19.887486 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:39:19.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.892170 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:39:19.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:19.896527 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:39:19.896588 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:39:19.902892 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:39:19.916294 systemd[1]: Switching root.
Nov 1 00:39:19.935160 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:39:19.935219 iscsid[839]: iscsid shutting down.
Nov 1 00:39:19.937072 systemd-journald[183]: Journal stopped
Nov 1 00:39:34.663353 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:39:34.663380 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:39:34.663391 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:39:34.663402 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:39:34.663409 kernel: SELinux: policy capability open_perms=1
Nov 1 00:39:34.663417 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:39:34.663429 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:39:34.663439 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:39:34.663450 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:39:34.663458 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:39:34.663466 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:39:34.663477 kernel: kauditd_printk_skb: 41 callbacks suppressed
Nov 1 00:39:34.663486 kernel: audit: type=1403 audit(1761957562.551:80): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:39:34.663499 systemd[1]: Successfully loaded SELinux policy in 301.550ms.
Nov 1 00:39:34.663517 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.360ms.
Nov 1 00:39:34.663529 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:39:34.663548 systemd[1]: Detected virtualization microsoft.
Nov 1 00:39:34.663563 systemd[1]: Detected architecture x86-64.
Nov 1 00:39:34.663573 systemd[1]: Detected first boot.
Nov 1 00:39:34.663588 systemd[1]: Hostname set to .
Nov 1 00:39:34.663599 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:39:34.663611 kernel: audit: type=1400 audit(1761957563.384:81): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:39:34.663624 kernel: audit: type=1400 audit(1761957563.403:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:39:34.663635 kernel: audit: type=1400 audit(1761957563.403:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:39:34.663645 kernel: audit: type=1334 audit(1761957563.416:84): prog-id=10 op=LOAD
Nov 1 00:39:34.663656 kernel: audit: type=1334 audit(1761957563.416:85): prog-id=10 op=UNLOAD
Nov 1 00:39:34.663667 kernel: audit: type=1334 audit(1761957563.416:86): prog-id=11 op=LOAD
Nov 1 00:39:34.663678 kernel: audit: type=1334 audit(1761957563.416:87): prog-id=11 op=UNLOAD
Nov 1 00:39:34.663687 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:39:34.663700 kernel: audit: type=1400 audit(1761957564.977:88): avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Nov 1 00:39:34.663712 kernel: audit: type=1300 audit(1761957564.977:88): arch=c000003e syscall=188 success=yes exit=0 a0=c000024302 a1=c00002a3d8 a2=c000028840 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:39:34.663723 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:39:34.663738 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:39:34.663751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:39:34.663763 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:39:34.663774 kernel: kauditd_printk_skb: 7 callbacks suppressed
Nov 1 00:39:34.663785 kernel: audit: type=1334 audit(1761957574.103:90): prog-id=12 op=LOAD
Nov 1 00:39:34.663796 kernel: audit: type=1334 audit(1761957574.103:91): prog-id=3 op=UNLOAD
Nov 1 00:39:34.663806 kernel: audit: type=1334 audit(1761957574.108:92): prog-id=13 op=LOAD
Nov 1 00:39:34.663821 kernel: audit: type=1334 audit(1761957574.113:93): prog-id=14 op=LOAD
Nov 1 00:39:34.663921 systemd[1]: iscsid.service: Deactivated successfully.
Nov 1 00:39:34.663943 kernel: audit: type=1334 audit(1761957574.113:94): prog-id=4 op=UNLOAD
Nov 1 00:39:34.663959 kernel: audit: type=1334 audit(1761957574.113:95): prog-id=5 op=UNLOAD
Nov 1 00:39:34.663976 kernel: audit: type=1131 audit(1761957574.114:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.663993 systemd[1]: Stopped iscsid.service.
Nov 1 00:39:34.664011 kernel: audit: type=1334 audit(1761957574.160:97): prog-id=12 op=UNLOAD
Nov 1 00:39:34.664026 kernel: audit: type=1131 audit(1761957574.166:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.664044 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:39:34.664058 systemd[1]: Stopped initrd-switch-root.service.
Nov 1 00:39:34.664070 kernel: audit: type=1130 audit(1761957574.190:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.664081 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:39:34.664094 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:39:34.664105 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:39:34.664118 systemd[1]: Created slice system-getty.slice.
Nov 1 00:39:34.664129 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:39:34.664141 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:39:34.664154 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:39:34.664165 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:39:34.664176 systemd[1]: Created slice user.slice.
Nov 1 00:39:34.664187 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:39:34.664198 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:39:34.664209 systemd[1]: Set up automount boot.automount.
Nov 1 00:39:34.664222 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:39:34.664231 systemd[1]: Stopped target initrd-switch-root.target.
Nov 1 00:39:34.664245 systemd[1]: Stopped target initrd-fs.target.
Nov 1 00:39:34.664257 systemd[1]: Stopped target initrd-root-fs.target.
Nov 1 00:39:34.664268 systemd[1]: Reached target integritysetup.target.
Nov 1 00:39:34.664278 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:39:34.664290 systemd[1]: Reached target remote-fs.target.
Nov 1 00:39:34.664303 systemd[1]: Reached target slices.target.
Nov 1 00:39:34.664312 systemd[1]: Reached target swap.target.
Nov 1 00:39:34.664325 systemd[1]: Reached target torcx.target.
Nov 1 00:39:34.664338 systemd[1]: Reached target veritysetup.target.
Nov 1 00:39:34.664349 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:39:34.664360 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:39:34.664371 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:39:34.664386 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:39:34.664396 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:39:34.664409 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:39:34.664421 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:39:34.664432 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:39:34.664445 systemd[1]: Mounting media.mount...
Nov 1 00:39:34.664456 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:39:34.664468 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:39:34.664477 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:39:34.664492 systemd[1]: Mounting tmp.mount...
Nov 1 00:39:34.664505 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:39:34.664515 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:39:34.664528 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:39:34.664539 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:39:34.664550 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:39:34.664561 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:39:34.664573 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:39:34.664586 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:39:34.664598 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:39:34.664611 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:39:34.664624 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:39:34.664634 systemd[1]: Stopped systemd-fsck-root.service.
Nov 1 00:39:34.664646 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:39:34.664657 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:39:34.664670 systemd[1]: Stopped systemd-journald.service.
Nov 1 00:39:34.664681 systemd[1]: Starting systemd-journald.service...
Nov 1 00:39:34.664693 kernel: loop: module loaded
Nov 1 00:39:34.664707 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:39:34.664717 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:39:34.664730 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:39:34.664741 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:39:34.664753 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 00:39:34.664764 systemd[1]: Stopped verity-setup.service.
Nov 1 00:39:34.664776 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:39:34.664789 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:39:34.664798 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:39:34.664813 systemd[1]: Mounted media.mount.
Nov 1 00:39:34.664838 systemd-journald[1165]: Journal started
Nov 1 00:39:34.664881 systemd-journald[1165]: Runtime Journal (/run/log/journal/88cc371e6b2248f18c20fde94e04721d) is 8.0M, max 159.0M, 151.0M free.
Nov 1 00:39:22.551000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:39:23.384000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:39:23.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:39:23.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:39:23.416000 audit: BPF prog-id=10 op=LOAD
Nov 1 00:39:23.416000 audit: BPF prog-id=10 op=UNLOAD
Nov 1 00:39:23.416000 audit: BPF prog-id=11 op=LOAD
Nov 1 00:39:23.416000 audit: BPF prog-id=11 op=UNLOAD
Nov 1 00:39:24.977000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Nov 1 00:39:24.977000 audit[1064]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c000024302 a1=c00002a3d8 a2=c000028840 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:39:24.977000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:39:24.984000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Nov 1 00:39:24.984000 audit[1064]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000243d9 a2=1ed a3=0 items=2 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:39:24.984000 audit: CWD cwd="/"
Nov 1 00:39:24.984000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:24.984000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:24.984000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:39:34.103000 audit: BPF prog-id=12 op=LOAD
Nov 1 00:39:34.103000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:39:34.108000 audit: BPF prog-id=13 op=LOAD
Nov 1 00:39:34.113000 audit: BPF prog-id=14 op=LOAD
Nov 1 00:39:34.113000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:39:34.113000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:39:34.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.160000 audit: BPF prog-id=12 op=UNLOAD
Nov 1 00:39:34.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.564000 audit: BPF prog-id=15 op=LOAD
Nov 1 00:39:34.564000 audit: BPF prog-id=16 op=LOAD
Nov 1 00:39:34.564000 audit: BPF prog-id=17 op=LOAD
Nov 1 00:39:34.564000 audit: BPF prog-id=13 op=UNLOAD
Nov 1 00:39:34.564000 audit: BPF prog-id=14 op=UNLOAD
Nov 1 00:39:34.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.659000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:39:34.659000 audit[1165]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc753e8c40 a2=4000 a3=7ffc753e8cdc items=0 ppid=1 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:39:34.659000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:39:34.102245 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:39:24.924441 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:39:34.102258 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Nov 1 00:39:24.925173 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Nov 1 00:39:34.115085 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:39:24.925196 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Nov 1 00:39:24.925237 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Nov 1 00:39:24.925248 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="skipped missing lower profile" missing profile=oem
Nov 1 00:39:24.925296 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Nov 1 00:39:24.925311 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Nov 1 00:39:24.925527 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Nov 1 00:39:24.925586 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Nov 1 00:39:24.925604 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Nov 1 00:39:24.959587 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Nov 1 00:39:24.959640 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Nov 1 00:39:24.959661 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Nov 1 00:39:24.959677 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Nov 1 00:39:24.959700 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Nov 1 00:39:24.959713 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Nov 1 00:39:32.623502 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:32Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:39:32.623749 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:32Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:39:32.623902 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:32Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:39:32.624090 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:32Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:39:32.624143 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:32Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Nov 1 00:39:32.624200 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2025-11-01T00:39:32Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Nov 1 00:39:34.677851 systemd[1]: Started systemd-journald.service.
Nov 1 00:39:34.677902 kernel: fuse: init (API version 7.34)
Nov 1 00:39:34.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.678355 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:39:34.680854 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:39:34.683350 systemd[1]: Mounted tmp.mount.
Nov 1 00:39:34.685251 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:39:34.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.689021 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:39:34.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.691760 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:39:34.691927 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:39:34.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.694458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:39:34.694600 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:39:34.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.697106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:39:34.697247 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:39:34.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.699536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:39:34.699678 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:39:34.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.702259 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:39:34.702402 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:39:34.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.704719 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:39:34.704900 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:39:34.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.707448 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:39:34.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.710114 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:39:34.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.712786 systemd[1]: Reached target network-pre.target.
Nov 1 00:39:34.716046 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:39:34.720248 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:39:34.723191 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:39:34.742485 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:39:34.746257 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:39:34.748530 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:39:34.749689 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:39:34.751951 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:39:34.753264 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:39:34.758845 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:39:34.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.763154 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:39:34.765601 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:39:34.769417 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:39:34.783688 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:39:34.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.786256 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:39:34.796319 systemd-journald[1165]: Time spent on flushing to /var/log/journal/88cc371e6b2248f18c20fde94e04721d is 18.328ms for 1152 entries.
Nov 1 00:39:34.796319 systemd-journald[1165]: System Journal (/var/log/journal/88cc371e6b2248f18c20fde94e04721d) is 8.0M, max 2.6G, 2.6G free.
Nov 1 00:39:34.883256 systemd-journald[1165]: Received client request to flush runtime journal.
Nov 1 00:39:34.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:34.822375 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:39:34.884885 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:39:34.826608 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:39:34.850955 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:39:34.884310 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:39:34.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:35.391277 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:39:35.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:36.009987 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:39:36.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:36.013000 audit: BPF prog-id=18 op=LOAD
Nov 1 00:39:36.013000 audit: BPF prog-id=19 op=LOAD
Nov 1 00:39:36.013000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:39:36.013000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:39:36.014833 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:39:36.032420 systemd-udevd[1190]: Using default interface naming scheme 'v252'.
Nov 1 00:39:36.430508 systemd[1]: Started systemd-udevd.service.
Nov 1 00:39:36.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:36.433000 audit: BPF prog-id=20 op=LOAD
Nov 1 00:39:36.435591 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:39:36.475579 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Nov 1 00:39:36.512000 audit: BPF prog-id=21 op=LOAD
Nov 1 00:39:36.514225 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:39:36.512000 audit: BPF prog-id=22 op=LOAD
Nov 1 00:39:36.512000 audit: BPF prog-id=23 op=LOAD
Nov 1 00:39:36.549845 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:39:36.570000 audit[1212]: AVC avc: denied { confidentiality } for pid=1212 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:39:36.583939 kernel: hv_vmbus: registering driver hv_balloon
Nov 1 00:39:36.589536 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:39:36.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:36.595852 kernel: hv_vmbus: registering driver hyperv_fb
Nov 1 00:39:36.622983 kernel: hv_utils: Registering HyperV Utility Driver
Nov 1 00:39:36.623051 kernel: hv_vmbus: registering driver hv_utils
Nov 1 00:39:36.659504 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 1 00:39:36.659594 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 1 00:39:36.668301 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 1 00:39:36.668376 kernel: Console: switching to colour dummy device 80x25
Nov 1 00:39:36.676023 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:39:36.676840 kernel: hv_utils: Heartbeat IC version 3.0
Nov 1 00:39:36.676908 kernel: hv_utils: Shutdown IC version 3.2
Nov 1 00:39:36.676942 kernel: hv_utils: TimeSync IC version 4.0
Nov 1 00:39:36.570000 audit[1212]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5645ea264270 a1=f83c a2=7f3b81caebc5 a3=5 items=12 ppid=1190 pid=1212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:39:36.570000 audit: CWD cwd="/"
Nov 1 00:39:36.570000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=1 name=(null) inode=15913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=2 name=(null) inode=15913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=3 name=(null) inode=15914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=4 name=(null) inode=15913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=5 name=(null) inode=15915 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=6 name=(null) inode=15913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=7 name=(null) inode=15916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=8 name=(null) inode=15913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=9 name=(null) inode=15917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=10 name=(null) inode=15913 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PATH item=11 name=(null) inode=15918 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:39:36.570000 audit: PROCTITLE proctitle="(udev-worker)"
Nov 1 00:39:37.768113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:39:37.816996 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Nov 1 00:39:37.842179 systemd-networkd[1196]: lo: Link UP
Nov 1 00:39:37.842466 systemd-networkd[1196]: lo: Gained carrier
Nov 1 00:39:37.843093 systemd-networkd[1196]: Enumeration completed
Nov 1 00:39:37.843219 systemd[1]: Started systemd-networkd.service.
Nov 1 00:39:37.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:37.847206 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:39:37.858041 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:39:37.858670 systemd[1]: Finished systemd-udev-settle.service.
Nov 1 00:39:37.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:37.862480 systemd[1]: Starting lvm2-activation-early.service...
Nov 1 00:39:37.920035 kernel: mlx5_core 23db:00:02.0 enP9179s1: Link up
Nov 1 00:39:37.920399 kernel: buffer_size[0]=0 is not enough for lossless buffer
Nov 1 00:39:37.944995 kernel: hv_netvsc 7c1e5204-780c-7c1e-5204-780c7c1e5204 eth0: Data path switched to VF: enP9179s1
Nov 1 00:39:37.945326 systemd-networkd[1196]: enP9179s1: Link UP
Nov 1 00:39:37.945587 systemd-networkd[1196]: eth0: Link UP
Nov 1 00:39:37.945686 systemd-networkd[1196]: eth0: Gained carrier
Nov 1 00:39:37.950252 systemd-networkd[1196]: enP9179s1: Gained carrier
Nov 1 00:39:37.978089 systemd-networkd[1196]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16
Nov 1 00:39:38.209547 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:39:38.256112 systemd[1]: Finished lvm2-activation-early.service.
Nov 1 00:39:38.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:38.262045 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:39:38.265621 systemd[1]: Starting lvm2-activation.service...
Nov 1 00:39:38.271832 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:39:38.294801 systemd[1]: Finished lvm2-activation.service.
Nov 1 00:39:38.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:38.297196 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:39:38.299418 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:39:38.299448 systemd[1]: Reached target local-fs.target. Nov 1 00:39:38.301528 systemd[1]: Reached target machines.target. Nov 1 00:39:38.305048 systemd[1]: Starting ldconfig.service... Nov 1 00:39:38.307487 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:38.307593 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:38.308756 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:39:38.312060 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:39:38.315777 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:39:38.319219 systemd[1]: Starting systemd-sysext.service... Nov 1 00:39:38.388837 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1270 (bootctl) Nov 1 00:39:38.390214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:39:38.468858 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:39:38.563459 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:39:38.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:38.703113 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:39:38.703356 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:39:38.805005 kernel: loop0: detected capacity change from 0 to 229808 Nov 1 00:39:39.428001 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:39:39.445999 kernel: loop1: detected capacity change from 0 to 229808 Nov 1 00:39:39.459124 (sd-sysext)[1282]: Using extensions 'kubernetes'. 
Nov 1 00:39:39.459559 (sd-sysext)[1282]: Merged extensions into '/usr'. Nov 1 00:39:39.475424 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:39.477066 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:39:39.479630 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:39.481298 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:39.485935 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:39.489655 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:39.491784 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:39.492013 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:39.492190 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:39.493332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:39.493481 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:39.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.496551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:39.496693 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 00:39:39.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.499585 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:39.499723 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:39.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.502817 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:39.502960 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:39:39.739426 systemd-networkd[1196]: eth0: Gained IPv6LL Nov 1 00:39:39.745331 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:39:39.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.763339 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:39:39.767302 systemd[1]: Finished systemd-sysext.service. 
Nov 1 00:39:39.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.771900 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:39:39.772574 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:39:39.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:39.776996 systemd[1]: Starting ensure-sysext.service... Nov 1 00:39:39.780463 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:39:39.789040 systemd[1]: Reloading. Nov 1 00:39:39.847457 /usr/lib/systemd/system-generators/torcx-generator[1309]: time="2025-11-01T00:39:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:39:39.847494 /usr/lib/systemd/system-generators/torcx-generator[1309]: time="2025-11-01T00:39:39Z" level=info msg="torcx already run" Nov 1 00:39:39.867538 systemd-fsck[1279]: fsck.fat 4.2 (2021-01-31) Nov 1 00:39:39.867538 systemd-fsck[1279]: /dev/sda1: 790 files, 120773/258078 clusters Nov 1 00:39:39.957859 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:39:39.957877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:39:39.960146 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Nov 1 00:39:39.974354 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:39:40.047449 kernel: kauditd_printk_skb: 78 callbacks suppressed Nov 1 00:39:40.047553 kernel: audit: type=1334 audit(1761957580.038:161): prog-id=24 op=LOAD Nov 1 00:39:40.038000 audit: BPF prog-id=24 op=LOAD Nov 1 00:39:40.051652 kernel: audit: type=1334 audit(1761957580.038:162): prog-id=21 op=UNLOAD Nov 1 00:39:40.038000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:39:40.055549 kernel: audit: type=1334 audit(1761957580.042:163): prog-id=25 op=LOAD Nov 1 00:39:40.042000 audit: BPF prog-id=25 op=LOAD Nov 1 00:39:40.059297 kernel: audit: type=1334 audit(1761957580.050:164): prog-id=26 op=LOAD Nov 1 00:39:40.050000 audit: BPF prog-id=26 op=LOAD Nov 1 00:39:40.063135 kernel: audit: type=1334 audit(1761957580.050:165): prog-id=22 op=UNLOAD Nov 1 00:39:40.050000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:39:40.066713 kernel: audit: type=1334 audit(1761957580.050:166): prog-id=23 op=UNLOAD Nov 1 00:39:40.050000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:39:40.070174 kernel: audit: type=1334 audit(1761957580.054:167): prog-id=27 op=LOAD Nov 1 00:39:40.054000 audit: BPF prog-id=27 op=LOAD Nov 1 00:39:40.073974 kernel: audit: type=1334 audit(1761957580.054:168): prog-id=28 op=LOAD Nov 1 00:39:40.054000 audit: BPF prog-id=28 op=LOAD Nov 1 00:39:40.054000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:39:40.082132 kernel: audit: type=1334 audit(1761957580.054:169): prog-id=18 op=UNLOAD Nov 1 00:39:40.082191 kernel: audit: type=1334 audit(1761957580.054:170): prog-id=19 op=UNLOAD Nov 1 00:39:40.054000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:39:40.058000 audit: BPF prog-id=29 op=LOAD Nov 1 00:39:40.058000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:39:40.062000 audit: BPF prog-id=30 op=LOAD Nov 1 00:39:40.062000 audit: BPF prog-id=15 op=UNLOAD Nov 1 
00:39:40.062000 audit: BPF prog-id=31 op=LOAD Nov 1 00:39:40.065000 audit: BPF prog-id=32 op=LOAD Nov 1 00:39:40.065000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:39:40.065000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:39:40.083541 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:39:40.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.095417 systemd[1]: Mounting boot.mount... Nov 1 00:39:40.101636 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:40.101923 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.103345 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:40.107165 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:40.111099 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:40.113296 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.113498 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:39:40.113694 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:40.114852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:40.114970 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:40.119623 systemd[1]: Mounted boot.mount. Nov 1 00:39:40.121949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:40.122219 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:40.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.125506 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:40.125641 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:40.128417 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:40.128533 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Nov 1 00:39:40.132792 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:40.133156 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.134705 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:40.137124 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:39:40.138546 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:40.142147 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:40.144472 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.144641 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:40.144801 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:40.146034 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:39:40.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.149127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:40.149271 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:40.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:40.152102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:40.152240 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:40.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.155552 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:40.155689 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:40.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.158516 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:40.158673 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.161475 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:40.161802 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.163090 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:40.166747 systemd[1]: Starting modprobe@drm.service... 
Nov 1 00:39:40.169963 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:40.173477 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:40.175411 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.175619 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:40.175809 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:40.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.177075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:40.177253 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:40.180106 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:39:40.180246 systemd[1]: Finished modprobe@drm.service. Nov 1 00:39:40.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.182676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 1 00:39:40.182810 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:40.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.185722 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:40.185860 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:40.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.189916 systemd[1]: Finished ensure-sysext.service. Nov 1 00:39:40.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:40.192505 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:40.192548 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:39:40.227407 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:39:41.080286 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Nov 1 00:39:41.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:41.085465 systemd[1]: Starting audit-rules.service... Nov 1 00:39:41.088574 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:39:41.092156 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:39:41.094000 audit: BPF prog-id=33 op=LOAD Nov 1 00:39:41.097030 systemd[1]: Starting systemd-resolved.service... Nov 1 00:39:41.101000 audit: BPF prog-id=34 op=LOAD Nov 1 00:39:41.104064 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:39:41.107808 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:39:41.153458 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:39:41.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:41.156348 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:39:41.173000 audit[1392]: SYSTEM_BOOT pid=1392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:39:41.180320 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:39:41.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:41.198270 systemd[1]: Started systemd-timesyncd.service. 
Nov 1 00:39:41.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:41.201437 systemd[1]: Reached target time-set.target. Nov 1 00:39:41.259825 systemd-resolved[1389]: Positive Trust Anchors: Nov 1 00:39:41.259844 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:39:41.259884 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:39:41.289143 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:39:41.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:41.356954 systemd-resolved[1389]: Using system hostname 'ci-3510.3.8-n-bb3ab03ab7'. Nov 1 00:39:41.358585 systemd[1]: Started systemd-resolved.service. Nov 1 00:39:41.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:41.361587 systemd[1]: Reached target network.target. Nov 1 00:39:41.363969 systemd[1]: Reached target network-online.target. Nov 1 00:39:41.366703 systemd[1]: Reached target nss-lookup.target. 
Nov 1 00:39:41.461000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:39:41.461000 audit[1407]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffe50e2620 a2=420 a3=0 items=0 ppid=1386 pid=1407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:39:41.461000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:39:41.462577 augenrules[1407]: No rules Nov 1 00:39:41.463343 systemd[1]: Finished audit-rules.service. Nov 1 00:39:41.520757 systemd-timesyncd[1390]: Contacted time server 185.137.221.158:123 (0.flatcar.pool.ntp.org). Nov 1 00:39:41.520837 systemd-timesyncd[1390]: Initial clock synchronization to Sat 2025-11-01 00:39:41.521166 UTC. Nov 1 00:39:46.805150 ldconfig[1269]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:39:46.815398 systemd[1]: Finished ldconfig.service. Nov 1 00:39:46.819636 systemd[1]: Starting systemd-update-done.service... Nov 1 00:39:46.826459 systemd[1]: Finished systemd-update-done.service. Nov 1 00:39:46.829121 systemd[1]: Reached target sysinit.target. Nov 1 00:39:46.831370 systemd[1]: Started motdgen.path. Nov 1 00:39:46.833269 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:39:46.836484 systemd[1]: Started logrotate.timer. Nov 1 00:39:46.838726 systemd[1]: Started mdadm.timer. Nov 1 00:39:46.840712 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:39:46.843040 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:39:46.843080 systemd[1]: Reached target paths.target. Nov 1 00:39:46.845204 systemd[1]: Reached target timers.target. 
Nov 1 00:39:46.847702 systemd[1]: Listening on dbus.socket. Nov 1 00:39:46.850634 systemd[1]: Starting docker.socket... Nov 1 00:39:46.854994 systemd[1]: Listening on sshd.socket. Nov 1 00:39:46.857486 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:46.857915 systemd[1]: Listening on docker.socket. Nov 1 00:39:46.860430 systemd[1]: Reached target sockets.target. Nov 1 00:39:46.862759 systemd[1]: Reached target basic.target. Nov 1 00:39:46.864903 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:39:46.864947 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:39:46.865883 systemd[1]: Starting containerd.service... Nov 1 00:39:46.869187 systemd[1]: Starting dbus.service... Nov 1 00:39:46.871802 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:39:46.875097 systemd[1]: Starting extend-filesystems.service... Nov 1 00:39:46.877296 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:39:46.879499 systemd[1]: Starting kubelet.service... Nov 1 00:39:46.882772 systemd[1]: Starting motdgen.service... Nov 1 00:39:46.885639 systemd[1]: Started nvidia.service. Nov 1 00:39:46.889312 systemd[1]: Starting prepare-helm.service... Nov 1 00:39:46.892921 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:39:46.896440 systemd[1]: Starting sshd-keygen.service... Nov 1 00:39:46.906716 systemd[1]: Starting systemd-logind.service... Nov 1 00:39:46.908843 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:39:46.908946 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:39:46.909858 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:39:46.912520 systemd[1]: Starting update-engine.service...
Nov 1 00:39:46.915736 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Nov 1 00:39:46.936306 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:39:46.936529 systemd[1]: Finished ssh-key-proc-cmdline.service.
Nov 1 00:39:46.958676 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:39:46.958895 systemd[1]: Finished motdgen.service.
Nov 1 00:39:46.970858 jq[1431]: true
Nov 1 00:39:46.970431 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:39:46.971178 jq[1417]: false
Nov 1 00:39:46.970664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Nov 1 00:39:46.996068 jq[1448]: true
Nov 1 00:39:47.011601 extend-filesystems[1418]: Found loop1
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found sda
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found sda1
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found sda2
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found sda3
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found usr
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found sda4
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found sda6
Nov 1 00:39:47.015410 extend-filesystems[1418]: Found sda7
Nov 1 00:39:47.034608 extend-filesystems[1418]: Found sda9
Nov 1 00:39:47.034608 extend-filesystems[1418]: Checking size of /dev/sda9
Nov 1 00:39:47.057364 env[1441]: time="2025-11-01T00:39:47.057260269Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Nov 1 00:39:47.067233 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 00:39:47.071137 systemd-logind[1429]: New seat seat0.
Nov 1 00:39:47.093370 extend-filesystems[1418]: Old size kept for /dev/sda9
Nov 1 00:39:47.107479 extend-filesystems[1418]: Found sr0
Nov 1 00:39:47.095346 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:39:47.111911 tar[1434]: linux-amd64/LICENSE
Nov 1 00:39:47.095505 systemd[1]: Finished extend-filesystems.service.
Nov 1 00:39:47.113151 tar[1434]: linux-amd64/helm
Nov 1 00:39:47.186086 bash[1470]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:39:47.188286 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:39:47.214329 env[1441]: time="2025-11-01T00:39:47.213233653Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:39:47.214329 env[1441]: time="2025-11-01T00:39:47.213388156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:39:47.237529 env[1441]: time="2025-11-01T00:39:47.237479110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:39:47.237529 env[1441]: time="2025-11-01T00:39:47.237527811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.237803517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.237836518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.237855718Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.237871319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.237970321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.238245927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.238422231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.238445232Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.238499533Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Nov 1 00:39:47.238662 env[1441]: time="2025-11-01T00:39:47.238515533Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:39:47.255336 env[1441]: time="2025-11-01T00:39:47.255301019Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:39:47.255448 env[1441]: time="2025-11-01T00:39:47.255372821Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:39:47.255448 env[1441]: time="2025-11-01T00:39:47.255395321Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:39:47.255528 env[1441]: time="2025-11-01T00:39:47.255445122Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255528 env[1441]: time="2025-11-01T00:39:47.255468023Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255528 env[1441]: time="2025-11-01T00:39:47.255491723Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255528 env[1441]: time="2025-11-01T00:39:47.255522924Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255684 env[1441]: time="2025-11-01T00:39:47.255544025Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255684 env[1441]: time="2025-11-01T00:39:47.255563225Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255684 env[1441]: time="2025-11-01T00:39:47.255582626Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255684 env[1441]: time="2025-11-01T00:39:47.255618226Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.255684 env[1441]: time="2025-11-01T00:39:47.255636527Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:39:47.255857 env[1441]: time="2025-11-01T00:39:47.255815031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.255944234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256385044Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256423945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256457446Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256536947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256555548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256572748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256642650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256672651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256691951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256708451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256725752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.256864 env[1441]: time="2025-11-01T00:39:47.256756353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.256925656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.256947257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.256963657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.257011258Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.257036759Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.257053459Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.257090860Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Nov 1 00:39:47.257383 env[1441]: time="2025-11-01T00:39:47.257132761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:39:47.257666 env[1441]: time="2025-11-01T00:39:47.257441768Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:39:47.257666 env[1441]: time="2025-11-01T00:39:47.257530070Z" level=info msg="Connect containerd service"
Nov 1 00:39:47.257666 env[1441]: time="2025-11-01T00:39:47.257588072Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.258455192Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.258775599Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.258833500Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.261457861Z" level=info msg="containerd successfully booted in 0.205838s"
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.262663688Z" level=info msg="Start subscribing containerd event"
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.262811692Z" level=info msg="Start recovering state"
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.262965695Z" level=info msg="Start event monitor"
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.263074698Z" level=info msg="Start snapshots syncer"
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.263088998Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:39:47.293404 env[1441]: time="2025-11-01T00:39:47.263099598Z" level=info msg="Start streaming server"
Nov 1 00:39:47.267192 dbus-daemon[1416]: [system] SELinux support is enabled
Nov 1 00:39:47.258959 systemd[1]: Started containerd.service.
Nov 1 00:39:47.267345 systemd[1]: Started dbus.service.
Nov 1 00:39:47.272163 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:39:47.272192 systemd[1]: Reached target system-config.target.
Nov 1 00:39:47.274950 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:39:47.274972 systemd[1]: Reached target user-config.target.
Nov 1 00:39:47.278820 systemd[1]: Started systemd-logind.service.
Nov 1 00:39:47.299727 systemd[1]: nvidia.service: Deactivated successfully.
Nov 1 00:39:47.844002 update_engine[1430]: I1101 00:39:47.843355 1430 main.cc:92] Flatcar Update Engine starting
Nov 1 00:39:47.906645 systemd[1]: Started update-engine.service.
Nov 1 00:39:47.907345 update_engine[1430]: I1101 00:39:47.907248 1430 update_check_scheduler.cc:74] Next update check in 9m55s
Nov 1 00:39:47.911524 systemd[1]: Started locksmithd.service.
Nov 1 00:39:48.057995 tar[1434]: linux-amd64/README.md
Nov 1 00:39:48.067842 systemd[1]: Finished prepare-helm.service.
Nov 1 00:39:48.429326 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:39:48.459765 systemd[1]: Finished sshd-keygen.service.
Nov 1 00:39:48.463895 systemd[1]: Starting issuegen.service...
Nov 1 00:39:48.467876 systemd[1]: Started waagent.service.
Nov 1 00:39:48.479019 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:39:48.479225 systemd[1]: Finished issuegen.service.
Nov 1 00:39:48.482687 systemd[1]: Started kubelet.service.
Nov 1 00:39:48.486671 systemd[1]: Starting systemd-user-sessions.service...
Nov 1 00:39:48.511840 systemd[1]: Finished systemd-user-sessions.service.
Nov 1 00:39:48.516296 systemd[1]: Started getty@tty1.service.
Nov 1 00:39:48.520491 systemd[1]: Started serial-getty@ttyS0.service.
Nov 1 00:39:48.523583 systemd[1]: Reached target getty.target.
Nov 1 00:39:48.526185 systemd[1]: Reached target multi-user.target.
Nov 1 00:39:48.530495 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Nov 1 00:39:48.538959 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 1 00:39:48.539169 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Nov 1 00:39:48.542388 systemd[1]: Startup finished in 918ms (firmware) + 18.458s (loader) + 961ms (kernel) + 14.304s (initrd) + 25.608s (userspace) = 1min 251ms.
Nov 1 00:39:48.980317 login[1536]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 00:39:48.982115 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 00:39:49.010251 systemd[1]: Created slice user-500.slice.
Nov 1 00:39:49.011885 systemd[1]: Starting user-runtime-dir@500.service...
Nov 1 00:39:49.014963 systemd-logind[1429]: New session 2 of user core.
Nov 1 00:39:49.019483 systemd-logind[1429]: New session 1 of user core.
Nov 1 00:39:49.027854 systemd[1]: Finished user-runtime-dir@500.service.
Nov 1 00:39:49.029694 systemd[1]: Starting user@500.service...
Nov 1 00:39:49.050266 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:49.076548 kubelet[1533]: E1101 00:39:49.076503 1533 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:39:49.078362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:39:49.078528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:39:49.078868 systemd[1]: kubelet.service: Consumed 1.079s CPU time.
Nov 1 00:39:49.241904 systemd[1546]: Queued start job for default target default.target.
Nov 1 00:39:49.242607 systemd[1546]: Reached target paths.target.
Nov 1 00:39:49.242643 systemd[1546]: Reached target sockets.target.
Nov 1 00:39:49.242666 systemd[1546]: Reached target timers.target.
Nov 1 00:39:49.242685 systemd[1546]: Reached target basic.target.
Nov 1 00:39:49.242816 systemd[1]: Started user@500.service.
Nov 1 00:39:49.244285 systemd[1]: Started session-1.scope.
Nov 1 00:39:49.245044 systemd[1546]: Reached target default.target.
Nov 1 00:39:49.245097 systemd[1546]: Startup finished in 185ms.
Nov 1 00:39:49.245380 systemd[1]: Started session-2.scope.
Nov 1 00:39:49.408255 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:39:54.160626 waagent[1525]: 2025-11-01T00:39:54.160517Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Nov 1 00:39:54.162060 waagent[1525]: 2025-11-01T00:39:54.161991Z INFO Daemon Daemon OS: flatcar 3510.3.8
Nov 1 00:39:54.163122 waagent[1525]: 2025-11-01T00:39:54.163067Z INFO Daemon Daemon Python: 3.9.16
Nov 1 00:39:54.164303 waagent[1525]: 2025-11-01T00:39:54.164245Z INFO Daemon Daemon Run daemon
Nov 1 00:39:54.165723 waagent[1525]: 2025-11-01T00:39:54.165671Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8'
Nov 1 00:39:54.178777 waagent[1525]: 2025-11-01T00:39:54.178655Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Nov 1 00:39:54.187379 waagent[1525]: 2025-11-01T00:39:54.187279Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 1 00:39:54.192760 waagent[1525]: 2025-11-01T00:39:54.192695Z INFO Daemon Daemon cloud-init is enabled: False
Nov 1 00:39:54.195506 waagent[1525]: 2025-11-01T00:39:54.195445Z INFO Daemon Daemon Using waagent for provisioning
Nov 1 00:39:54.198924 waagent[1525]: 2025-11-01T00:39:54.198861Z INFO Daemon Daemon Activate resource disk
Nov 1 00:39:54.201634 waagent[1525]: 2025-11-01T00:39:54.201572Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Nov 1 00:39:54.211965 waagent[1525]: 2025-11-01T00:39:54.211898Z INFO Daemon Daemon Found device: None
Nov 1 00:39:54.215093 waagent[1525]: 2025-11-01T00:39:54.215032Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Nov 1 00:39:54.219702 waagent[1525]: 2025-11-01T00:39:54.219639Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Nov 1 00:39:54.226137 waagent[1525]: 2025-11-01T00:39:54.226076Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 1 00:39:54.229501 waagent[1525]: 2025-11-01T00:39:54.229440Z INFO Daemon Daemon Running default provisioning handler
Nov 1 00:39:54.240310 waagent[1525]: 2025-11-01T00:39:54.240187Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Nov 1 00:39:54.249866 waagent[1525]: 2025-11-01T00:39:54.249761Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 1 00:39:54.255087 waagent[1525]: 2025-11-01T00:39:54.255026Z INFO Daemon Daemon cloud-init is enabled: False
Nov 1 00:39:54.257945 waagent[1525]: 2025-11-01T00:39:54.257874Z INFO Daemon Daemon Copying ovf-env.xml
Nov 1 00:39:54.326220 waagent[1525]: 2025-11-01T00:39:54.321585Z INFO Daemon Daemon Successfully mounted dvd
Nov 1 00:39:54.418314 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Nov 1 00:39:54.457582 waagent[1525]: 2025-11-01T00:39:54.457434Z INFO Daemon Daemon Detect protocol endpoint
Nov 1 00:39:54.474701 waagent[1525]: 2025-11-01T00:39:54.458000Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 1 00:39:54.474701 waagent[1525]: 2025-11-01T00:39:54.459365Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Nov 1 00:39:54.474701 waagent[1525]: 2025-11-01T00:39:54.460386Z INFO Daemon Daemon Test for route to 168.63.129.16
Nov 1 00:39:54.474701 waagent[1525]: 2025-11-01T00:39:54.461705Z INFO Daemon Daemon Route to 168.63.129.16 exists
Nov 1 00:39:54.474701 waagent[1525]: 2025-11-01T00:39:54.462673Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Nov 1 00:39:54.642532 waagent[1525]: 2025-11-01T00:39:54.642455Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Nov 1 00:39:54.651093 waagent[1525]: 2025-11-01T00:39:54.643422Z INFO Daemon Daemon Wire protocol version:2012-11-30
Nov 1 00:39:54.651093 waagent[1525]: 2025-11-01T00:39:54.644620Z INFO Daemon Daemon Server preferred version:2015-04-05
Nov 1 00:39:54.974817 waagent[1525]: 2025-11-01T00:39:54.974650Z INFO Daemon Daemon Initializing goal state during protocol detection
Nov 1 00:39:54.983896 waagent[1525]: 2025-11-01T00:39:54.983818Z INFO Daemon Daemon Forcing an update of the goal state..
Nov 1 00:39:54.989677 waagent[1525]: 2025-11-01T00:39:54.984172Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Nov 1 00:39:55.162291 waagent[1525]: 2025-11-01T00:39:55.162160Z INFO Daemon Daemon Found private key matching thumbprint 078933A43F22EA4617E974A3E54547C49A55B49A
Nov 1 00:39:55.171775 waagent[1525]: 2025-11-01T00:39:55.167624Z INFO Daemon Daemon Fetch goal state completed
Nov 1 00:39:55.198310 waagent[1525]: 2025-11-01T00:39:55.198242Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: ebca20af-2fbd-48a3-bb51-7be28199e567 New eTag: 18063216063919623911]
Nov 1 00:39:55.204042 waagent[1525]: 2025-11-01T00:39:55.203957Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Nov 1 00:39:55.254169 waagent[1525]: 2025-11-01T00:39:55.254029Z INFO Daemon Daemon Starting provisioning
Nov 1 00:39:55.257487 waagent[1525]: 2025-11-01T00:39:55.257406Z INFO Daemon Daemon Handle ovf-env.xml.
Nov 1 00:39:55.260262 waagent[1525]: 2025-11-01T00:39:55.260194Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-bb3ab03ab7]
Nov 1 00:39:55.284545 waagent[1525]: 2025-11-01T00:39:55.284415Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-bb3ab03ab7]
Nov 1 00:39:55.294272 waagent[1525]: 2025-11-01T00:39:55.285234Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Nov 1 00:39:55.294272 waagent[1525]: 2025-11-01T00:39:55.286495Z INFO Daemon Daemon Primary interface is [eth0]
Nov 1 00:39:55.301074 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Nov 1 00:39:55.301337 systemd[1]: Stopped systemd-networkd-wait-online.service.
Nov 1 00:39:55.301411 systemd[1]: Stopping systemd-networkd-wait-online.service...
Nov 1 00:39:55.301777 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:39:55.310039 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Nov 1 00:39:55.311353 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:39:55.311548 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:39:55.313865 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:39:55.345574 systemd-networkd[1584]: enP9179s1: Link UP
Nov 1 00:39:55.345586 systemd-networkd[1584]: enP9179s1: Gained carrier
Nov 1 00:39:55.347068 systemd-networkd[1584]: eth0: Link UP
Nov 1 00:39:55.347076 systemd-networkd[1584]: eth0: Gained carrier
Nov 1 00:39:55.347526 systemd-networkd[1584]: lo: Link UP
Nov 1 00:39:55.347535 systemd-networkd[1584]: lo: Gained carrier
Nov 1 00:39:55.347850 systemd-networkd[1584]: eth0: Gained IPv6LL
Nov 1 00:39:55.348151 systemd-networkd[1584]: Enumeration completed
Nov 1 00:39:55.348243 systemd[1]: Started systemd-networkd.service.
Nov 1 00:39:55.350265 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:39:55.353377 systemd-networkd[1584]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:39:55.356020 waagent[1525]: 2025-11-01T00:39:55.355837Z INFO Daemon Daemon Create user account if not exists
Nov 1 00:39:55.359694 waagent[1525]: 2025-11-01T00:39:55.356575Z INFO Daemon Daemon User core already exists, skip useradd
Nov 1 00:39:55.359694 waagent[1525]: 2025-11-01T00:39:55.358058Z INFO Daemon Daemon Configure sudoer
Nov 1 00:39:55.364536 waagent[1525]: 2025-11-01T00:39:55.364460Z INFO Daemon Daemon Configure sshd
Nov 1 00:39:55.364926 waagent[1525]: 2025-11-01T00:39:55.364867Z INFO Daemon Daemon Deploy ssh public key.
Nov 1 00:39:55.422052 systemd-networkd[1584]: eth0: DHCPv4 address 10.200.4.33/24, gateway 10.200.4.1 acquired from 168.63.129.16
Nov 1 00:39:55.424964 systemd[1]: Finished systemd-networkd-wait-online.service.
Nov 1 00:39:56.528575 waagent[1525]: 2025-11-01T00:39:56.528463Z INFO Daemon Daemon Provisioning complete
Nov 1 00:39:56.546286 waagent[1525]: 2025-11-01T00:39:56.546206Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Nov 1 00:39:56.552295 waagent[1525]: 2025-11-01T00:39:56.552219Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Nov 1 00:39:56.561850 waagent[1525]: 2025-11-01T00:39:56.561777Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Nov 1 00:39:56.828185 waagent[1590]: 2025-11-01T00:39:56.828002Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Nov 1 00:39:56.828914 waagent[1590]: 2025-11-01T00:39:56.828841Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:39:56.829079 waagent[1590]: 2025-11-01T00:39:56.829019Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:39:56.840258 waagent[1590]: 2025-11-01T00:39:56.840183Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Nov 1 00:39:56.840417 waagent[1590]: 2025-11-01T00:39:56.840359Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Nov 1 00:39:56.891389 waagent[1590]: 2025-11-01T00:39:56.891269Z INFO ExtHandler ExtHandler Found private key matching thumbprint 078933A43F22EA4617E974A3E54547C49A55B49A
Nov 1 00:39:56.891684 waagent[1590]: 2025-11-01T00:39:56.891625Z INFO ExtHandler ExtHandler Fetch goal state completed
Nov 1 00:39:56.906677 waagent[1590]: 2025-11-01T00:39:56.906611Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: deaf143e-dbee-4f0c-80dc-e3f822011396 New eTag: 18063216063919623911]
Nov 1 00:39:56.907200 waagent[1590]: 2025-11-01T00:39:56.907139Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Nov 1 00:39:56.985078 waagent[1590]: 2025-11-01T00:39:56.984908Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 1 00:39:56.995614 waagent[1590]: 2025-11-01T00:39:56.995537Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1590
Nov 1 00:39:56.998926 waagent[1590]: 2025-11-01T00:39:56.998857Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Nov 1 00:39:57.000111 waagent[1590]: 2025-11-01T00:39:57.000051Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 1 00:39:57.082858 waagent[1590]: 2025-11-01T00:39:57.082715Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Nov 1 00:39:57.083398 waagent[1590]: 2025-11-01T00:39:57.083317Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 1 00:39:57.091744 waagent[1590]: 2025-11-01T00:39:57.091687Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 1 00:39:57.092248 waagent[1590]: 2025-11-01T00:39:57.092184Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Nov 1 00:39:57.093324 waagent[1590]: 2025-11-01T00:39:57.093255Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Nov 1 00:39:57.094611 waagent[1590]: 2025-11-01T00:39:57.094548Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 1 00:39:57.095691 waagent[1590]: 2025-11-01T00:39:57.095633Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 1 00:39:57.095822 waagent[1590]: 2025-11-01T00:39:57.095746Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:39:57.096291 waagent[1590]: 2025-11-01T00:39:57.096229Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Nov 1 00:39:57.096653 waagent[1590]: 2025-11-01T00:39:57.096599Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:39:57.097141 waagent[1590]: 2025-11-01T00:39:57.097085Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Nov 1 00:39:57.097664 waagent[1590]: 2025-11-01T00:39:57.097609Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Nov 1 00:39:57.097923 waagent[1590]: 2025-11-01T00:39:57.097872Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:39:57.098624 waagent[1590]: 2025-11-01T00:39:57.098560Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 1 00:39:57.098866 waagent[1590]: 2025-11-01T00:39:57.098811Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 1 00:39:57.099126 waagent[1590]: 2025-11-01T00:39:57.099061Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:39:57.099265 waagent[1590]: 2025-11-01T00:39:57.099209Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Nov 1 00:39:57.099265 waagent[1590]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Nov 1 00:39:57.099265 waagent[1590]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Nov 1 00:39:57.099265 waagent[1590]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Nov 1 00:39:57.099265 waagent[1590]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:39:57.099265 waagent[1590]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:39:57.099265 waagent[1590]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:39:57.100042 waagent[1590]: 2025-11-01T00:39:57.099970Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 1 00:39:57.103534 waagent[1590]: 2025-11-01T00:39:57.103311Z INFO EnvHandler ExtHandler Configure routes
Nov 1 00:39:57.107698 waagent[1590]: 2025-11-01T00:39:57.107647Z INFO EnvHandler ExtHandler Gateway:None
Nov 1 00:39:57.108797 waagent[1590]: 2025-11-01T00:39:57.108746Z INFO EnvHandler ExtHandler Routes:None
Nov 1 00:39:57.118447 waagent[1590]: 2025-11-01T00:39:57.118392Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Nov 1 00:39:57.118993 waagent[1590]: 2025-11-01T00:39:57.118934Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Nov 1 00:39:57.119795 waagent[1590]: 2025-11-01T00:39:57.119737Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Nov 1 00:39:57.156605 waagent[1590]: 2025-11-01T00:39:57.156505Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Nov 1 00:39:57.167882 waagent[1590]: 2025-11-01T00:39:57.167812Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1584'
Nov 1 00:39:57.667934 waagent[1590]: 2025-11-01T00:39:57.667785Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 1 00:39:57.667934 waagent[1590]: Executing ['ip', '-a', '-o', 'link']:
Nov 1 00:39:57.667934 waagent[1590]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 1 00:39:57.667934 waagent[1590]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:78:0c brd ff:ff:ff:ff:ff:ff
Nov 1 00:39:57.667934 waagent[1590]: 3: enP9179s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:78:0c brd ff:ff:ff:ff:ff:ff\ altname enP9179p0s2
Nov 1 00:39:57.667934 waagent[1590]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 1 00:39:57.667934 waagent[1590]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 1 00:39:57.667934 waagent[1590]: 2: eth0 inet 10.200.4.33/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 1 00:39:57.667934 waagent[1590]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 1 00:39:57.667934 waagent[1590]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Nov 1 00:39:57.667934 waagent[1590]: 2: eth0 inet6 fe80::7e1e:52ff:fe04:780c/64 scope link \ valid_lft forever preferred_lft forever
Nov 1 00:39:57.952045 waagent[1590]: 2025-11-01T00:39:57.951916Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Nov 1 00:39:57.959174 waagent[1590]: 2025-11-01T00:39:57.959112Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.15.0.1 -- exiting
Nov 1 00:39:57.959760 waagent[1590]: 2025-11-01T00:39:57.959697Z INFO EnvHandler ExtHandler Firewall rules:
Nov 1 00:39:57.959760 waagent[1590]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:39:57.959760 waagent[1590]: pkts bytes target prot opt in out source destination
Nov 1 00:39:57.959760 waagent[1590]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:39:57.959760 waagent[1590]: pkts bytes target prot opt in out source destination
Nov 1 00:39:57.959760 waagent[1590]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:39:57.959760 waagent[1590]: pkts bytes target prot opt in out source destination
Nov 1 00:39:57.959760 waagent[1590]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 1 00:39:57.959760 waagent[1590]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 1 00:39:58.567063 waagent[1525]: 2025-11-01T00:39:58.566718Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Nov 1 00:39:58.573458 waagent[1525]: 2025-11-01T00:39:58.573393Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.15.0.1 to be the latest agent
Nov 1 00:39:59.267674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:39:59.267935 systemd[1]: Stopped kubelet.service.
Nov 1 00:39:59.268005 systemd[1]: kubelet.service: Consumed 1.079s CPU time.
Nov 1 00:39:59.269895 systemd[1]: Starting kubelet.service...
Nov 1 00:39:59.627342 systemd[1]: Started kubelet.service.
Nov 1 00:39:59.712031 waagent[1628]: 2025-11-01T00:39:59.711923Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.15.0.1)
Nov 1 00:39:59.712781 waagent[1628]: 2025-11-01T00:39:59.712705Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8
Nov 1 00:39:59.712945 waagent[1628]: 2025-11-01T00:39:59.712887Z INFO ExtHandler ExtHandler Python: 3.9.16
Nov 1 00:39:59.713145 waagent[1628]: 2025-11-01T00:39:59.713093Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Nov 1 00:39:59.728925 waagent[1628]: 2025-11-01T00:39:59.728836Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 1 00:39:59.729359 waagent[1628]: 2025-11-01T00:39:59.729300Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:39:59.729551 waagent[1628]: 2025-11-01T00:39:59.729499Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:39:59.729810 waagent[1628]: 2025-11-01T00:39:59.729744Z INFO ExtHandler ExtHandler Initializing the goal state...
Nov 1 00:39:59.743025 waagent[1628]: 2025-11-01T00:39:59.742951Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 1 00:39:59.752030 waagent[1628]: 2025-11-01T00:39:59.751960Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Nov 1 00:39:59.752956 waagent[1628]: 2025-11-01T00:39:59.752896Z INFO ExtHandler
Nov 1 00:39:59.753148 waagent[1628]: 2025-11-01T00:39:59.753095Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4caec6d8-732c-4ec7-b5ee-5a96d0618dcb eTag: 18063216063919623911 source: Fabric]
Nov 1 00:39:59.753892 waagent[1628]: 2025-11-01T00:39:59.753827Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Nov 1 00:39:59.755065 waagent[1628]: 2025-11-01T00:39:59.755006Z INFO ExtHandler
Nov 1 00:39:59.755235 waagent[1628]: 2025-11-01T00:39:59.755183Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Nov 1 00:39:59.764495 waagent[1628]: 2025-11-01T00:39:59.764444Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Nov 1 00:39:59.764994 waagent[1628]: 2025-11-01T00:39:59.764927Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Nov 1 00:39:59.786443 waagent[1628]: 2025-11-01T00:39:59.786387Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Nov 1 00:40:00.071078 kubelet[1635]: E1101 00:40:00.071027 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:40:00.074232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:40:00.074416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:40:00.123371 waagent[1628]: 2025-11-01T00:40:00.123248Z INFO ExtHandler Downloaded certificate {'thumbprint': '078933A43F22EA4617E974A3E54547C49A55B49A', 'hasPrivateKey': True}
Nov 1 00:40:00.124634 waagent[1628]: 2025-11-01T00:40:00.124560Z INFO ExtHandler Fetch goal state from WireServer completed
Nov 1 00:40:00.125522 waagent[1628]: 2025-11-01T00:40:00.125459Z INFO ExtHandler ExtHandler Goal state initialization completed.
Nov 1 00:40:00.143054 waagent[1628]: 2025-11-01T00:40:00.142948Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Nov 1 00:40:00.150824 waagent[1628]: 2025-11-01T00:40:00.150735Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Nov 1 00:40:00.154436 waagent[1628]: 2025-11-01T00:40:00.154347Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
Nov 1 00:40:00.154649 waagent[1628]: 2025-11-01T00:40:00.154595Z INFO ExtHandler ExtHandler Checking state of the firewall
Nov 1 00:40:00.186746 waagent[1628]: 2025-11-01T00:40:00.186636Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Current state:
Nov 1 00:40:00.186746 waagent[1628]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.186746 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.186746 waagent[1628]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.186746 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.186746 waagent[1628]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.186746 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.186746 waagent[1628]: 55 7871 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 1 00:40:00.186746 waagent[1628]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 1 00:40:00.187874 waagent[1628]: 2025-11-01T00:40:00.187809Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
Nov 1 00:40:00.190525 waagent[1628]: 2025-11-01T00:40:00.190423Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
Nov 1 00:40:00.190946 waagent[1628]: 2025-11-01T00:40:00.190887Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up /lib/systemd/system/waagent-network-setup.service
Nov 1 00:40:00.191332 waagent[1628]: 2025-11-01T00:40:00.191273Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 1 00:40:00.199461 waagent[1628]: 2025-11-01T00:40:00.199404Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 1 00:40:00.199927 waagent[1628]: 2025-11-01T00:40:00.199871Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Nov 1 00:40:00.207230 waagent[1628]: 2025-11-01T00:40:00.207163Z INFO ExtHandler ExtHandler WALinuxAgent-2.15.0.1 running as process 1628
Nov 1 00:40:00.210322 waagent[1628]: 2025-11-01T00:40:00.210261Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Nov 1 00:40:00.211074 waagent[1628]: 2025-11-01T00:40:00.211016Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
Nov 1 00:40:00.211893 waagent[1628]: 2025-11-01T00:40:00.211834Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Nov 1 00:40:00.214452 waagent[1628]: 2025-11-01T00:40:00.214391Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem
Nov 1 00:40:00.214772 waagent[1628]: 2025-11-01T00:40:00.214715Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Nov 1 00:40:00.216766 waagent[1628]: 2025-11-01T00:40:00.216707Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 1 00:40:00.217806 waagent[1628]: 2025-11-01T00:40:00.217747Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 1 00:40:00.218069 waagent[1628]: 2025-11-01T00:40:00.218017Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:40:00.218598 waagent[1628]: 2025-11-01T00:40:00.218546Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:40:00.218787 waagent[1628]: 2025-11-01T00:40:00.218721Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:40:00.219633 waagent[1628]: 2025-11-01T00:40:00.219579Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Nov 1 00:40:00.219866 waagent[1628]: 2025-11-01T00:40:00.219811Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:40:00.220601 waagent[1628]: 2025-11-01T00:40:00.220547Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Nov 1 00:40:00.220922 waagent[1628]: 2025-11-01T00:40:00.220868Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Nov 1 00:40:00.220922 waagent[1628]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Nov 1 00:40:00.220922 waagent[1628]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Nov 1 00:40:00.220922 waagent[1628]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Nov 1 00:40:00.220922 waagent[1628]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:40:00.220922 waagent[1628]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:40:00.220922 waagent[1628]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:40:00.221235 waagent[1628]: 2025-11-01T00:40:00.221182Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Nov 1 00:40:00.221806 waagent[1628]: 2025-11-01T00:40:00.221754Z INFO EnvHandler ExtHandler Configure routes
Nov 1 00:40:00.224996 waagent[1628]: 2025-11-01T00:40:00.224874Z INFO EnvHandler ExtHandler Gateway:None
Nov 1 00:40:00.225556 waagent[1628]: 2025-11-01T00:40:00.225495Z INFO EnvHandler ExtHandler Routes:None
Nov 1 00:40:00.227470 waagent[1628]: 2025-11-01T00:40:00.227382Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 1 00:40:00.229656 waagent[1628]: 2025-11-01T00:40:00.229536Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 1 00:40:00.230022 waagent[1628]: 2025-11-01T00:40:00.229952Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 1 00:40:00.256059 waagent[1628]: 2025-11-01T00:40:00.255969Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Nov 1 00:40:00.256498 waagent[1628]: 2025-11-01T00:40:00.256438Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 1 00:40:00.256498 waagent[1628]: Executing ['ip', '-a', '-o', 'link']:
Nov 1 00:40:00.256498 waagent[1628]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 1 00:40:00.256498 waagent[1628]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:78:0c brd ff:ff:ff:ff:ff:ff
Nov 1 00:40:00.256498 waagent[1628]: 3: enP9179s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:04:78:0c brd ff:ff:ff:ff:ff:ff\ altname enP9179p0s2
Nov 1 00:40:00.256498 waagent[1628]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 1 00:40:00.256498 waagent[1628]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 1 00:40:00.256498 waagent[1628]: 2: eth0 inet 10.200.4.33/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 1 00:40:00.256498 waagent[1628]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 1 00:40:00.256498 waagent[1628]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Nov 1 00:40:00.256498 waagent[1628]: 2: eth0 inet6 fe80::7e1e:52ff:fe04:780c/64 scope link \ valid_lft forever preferred_lft forever
Nov 1 00:40:00.267282 waagent[1628]: 2025-11-01T00:40:00.267213Z INFO ExtHandler ExtHandler Downloading agent manifest
Nov 1 00:40:00.298731 waagent[1628]: 2025-11-01T00:40:00.298671Z INFO ExtHandler ExtHandler
Nov 1 00:40:00.303706 waagent[1628]: 2025-11-01T00:40:00.303588Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dc032196-5c75-4a1c-a4ff-fdfd8437ce79 correlation a01c3457-a4fe-4e2c-aaa3-e25bd276cccb created: 2025-11-01T00:38:38.386559Z]
Nov 1 00:40:00.305185 waagent[1628]: 2025-11-01T00:40:00.305102Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Nov 1 00:40:00.306868 waagent[1628]: 2025-11-01T00:40:00.306810Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it.
Current state:
Nov 1 00:40:00.306868 waagent[1628]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.306868 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.306868 waagent[1628]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.306868 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.306868 waagent[1628]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.306868 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.306868 waagent[1628]: 76 10379 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 1 00:40:00.306868 waagent[1628]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 1 00:40:00.309524 waagent[1628]: 2025-11-01T00:40:00.309467Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms]
Nov 1 00:40:00.362489 waagent[1628]: 2025-11-01T00:40:00.362332Z INFO EnvHandler ExtHandler The firewall was setup successfully:
Nov 1 00:40:00.362489 waagent[1628]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.362489 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.362489 waagent[1628]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.362489 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.362489 waagent[1628]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:40:00.362489 waagent[1628]: pkts bytes target prot opt in out source destination
Nov 1 00:40:00.362489 waagent[1628]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 1 00:40:00.362489 waagent[1628]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 1 00:40:00.362489 waagent[1628]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 1 00:40:00.363786 waagent[1628]: 2025-11-01T00:40:00.363724Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Nov 1 00:40:01.384889 waagent[1628]: 2025-11-01T00:40:01.384807Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Nov 1 00:40:01.387552 waagent[1628]: 2025-11-01T00:40:01.387461Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.15.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 82F35422-9EAD-40BB-85A6-38E04C643794;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Nov 1 00:40:10.267610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 00:40:10.267922 systemd[1]: Stopped kubelet.service.
Nov 1 00:40:10.270058 systemd[1]: Starting kubelet.service...
Nov 1 00:40:10.827286 systemd[1]: Started kubelet.service.
Nov 1 00:40:11.126130 kubelet[1688]: E1101 00:40:11.126013 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:40:11.128027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:40:11.128192 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:40:21.267620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 1 00:40:21.267938 systemd[1]: Stopped kubelet.service.
Nov 1 00:40:21.270017 systemd[1]: Starting kubelet.service...
Nov 1 00:40:21.648155 systemd[1]: Started kubelet.service.
Nov 1 00:40:22.020447 kubelet[1698]: E1101 00:40:22.020394 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:40:22.022205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:40:22.022364 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:40:24.898708 systemd[1]: Created slice system-sshd.slice.
Nov 1 00:40:24.900826 systemd[1]: Started sshd@0-10.200.4.33:22-10.200.16.10:49156.service.
Nov 1 00:40:25.682356 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Nov 1 00:40:25.764082 sshd[1704]: Accepted publickey for core from 10.200.16.10 port 49156 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:40:25.765768 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:25.770303 systemd-logind[1429]: New session 3 of user core.
Nov 1 00:40:25.771870 systemd[1]: Started session-3.scope.
Nov 1 00:40:26.279075 systemd[1]: Started sshd@1-10.200.4.33:22-10.200.16.10:49172.service.
Nov 1 00:40:26.870765 sshd[1709]: Accepted publickey for core from 10.200.16.10 port 49172 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:40:26.872509 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:26.877441 systemd[1]: Started session-4.scope.
Nov 1 00:40:26.877887 systemd-logind[1429]: New session 4 of user core.
Nov 1 00:40:27.294022 sshd[1709]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:27.297392 systemd[1]: sshd@1-10.200.4.33:22-10.200.16.10:49172.service: Deactivated successfully.
Nov 1 00:40:27.298430 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:40:27.299238 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:40:27.300184 systemd-logind[1429]: Removed session 4.
Nov 1 00:40:27.393500 systemd[1]: Started sshd@2-10.200.4.33:22-10.200.16.10:49182.service.
Nov 1 00:40:27.986385 sshd[1715]: Accepted publickey for core from 10.200.16.10 port 49182 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:40:27.988113 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:27.993245 systemd[1]: Started session-5.scope.
Nov 1 00:40:27.993835 systemd-logind[1429]: New session 5 of user core.
Nov 1 00:40:28.403558 sshd[1715]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:28.406854 systemd[1]: sshd@2-10.200.4.33:22-10.200.16.10:49182.service: Deactivated successfully.
Nov 1 00:40:28.407860 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 00:40:28.408645 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit.
Nov 1 00:40:28.409563 systemd-logind[1429]: Removed session 5.
Nov 1 00:40:28.502018 systemd[1]: Started sshd@3-10.200.4.33:22-10.200.16.10:49188.service.
Nov 1 00:40:29.094724 sshd[1721]: Accepted publickey for core from 10.200.16.10 port 49188 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:40:29.096428 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:29.101185 systemd[1]: Started session-6.scope.
Nov 1 00:40:29.101780 systemd-logind[1429]: New session 6 of user core.
Nov 1 00:40:29.516407 sshd[1721]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:29.519605 systemd[1]: sshd@3-10.200.4.33:22-10.200.16.10:49188.service: Deactivated successfully.
Nov 1 00:40:29.520433 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:40:29.521046 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:40:29.521797 systemd-logind[1429]: Removed session 6.
Nov 1 00:40:29.615511 systemd[1]: Started sshd@4-10.200.4.33:22-10.200.16.10:49198.service.
Nov 1 00:40:30.204300 sshd[1727]: Accepted publickey for core from 10.200.16.10 port 49198 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:40:30.206004 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:30.211726 systemd[1]: Started session-7.scope.
Nov 1 00:40:30.212502 systemd-logind[1429]: New session 7 of user core.
Nov 1 00:40:30.782756 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:40:30.783147 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:40:30.806467 systemd[1]: Starting docker.service...
Nov 1 00:40:30.839766 env[1740]: time="2025-11-01T00:40:30.839718405Z" level=info msg="Starting up"
Nov 1 00:40:30.842385 env[1740]: time="2025-11-01T00:40:30.842134508Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:40:30.842489 env[1740]: time="2025-11-01T00:40:30.842452609Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:40:30.842545 env[1740]: time="2025-11-01T00:40:30.842495809Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:40:30.842545 env[1740]: time="2025-11-01T00:40:30.842513409Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:40:30.844446 env[1740]: time="2025-11-01T00:40:30.844420012Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:40:30.844446 env[1740]: time="2025-11-01T00:40:30.844437412Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:40:30.844599 env[1740]: time="2025-11-01T00:40:30.844453212Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:40:30.844599 env[1740]: time="2025-11-01T00:40:30.844463812Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:40:30.850629 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2376324415-merged.mount: Deactivated successfully.
Nov 1 00:40:30.952969 env[1740]: time="2025-11-01T00:40:30.952917467Z" level=info msg="Loading containers: start."
Nov 1 00:40:31.119010 kernel: Initializing XFRM netlink socket
Nov 1 00:40:31.155312 env[1740]: time="2025-11-01T00:40:31.155271243Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 1 00:40:31.268852 systemd-networkd[1584]: docker0: Link UP
Nov 1 00:40:31.291297 env[1740]: time="2025-11-01T00:40:31.291256126Z" level=info msg="Loading containers: done."
Nov 1 00:40:31.303090 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3736552208-merged.mount: Deactivated successfully.
Nov 1 00:40:31.313729 env[1740]: time="2025-11-01T00:40:31.313692956Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:40:31.313917 env[1740]: time="2025-11-01T00:40:31.313890756Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Nov 1 00:40:31.314056 env[1740]: time="2025-11-01T00:40:31.314034056Z" level=info msg="Daemon has completed initialization"
Nov 1 00:40:31.341007 systemd[1]: Started docker.service.
Nov 1 00:40:31.348735 env[1740]: time="2025-11-01T00:40:31.348679403Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:40:32.267487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 1 00:40:32.267781 systemd[1]: Stopped kubelet.service.
Nov 1 00:40:32.269749 systemd[1]: Starting kubelet.service...
Nov 1 00:40:32.456968 systemd[1]: Started kubelet.service.
Nov 1 00:40:32.493388 kubelet[1859]: E1101 00:40:32.493333 1859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:40:32.495186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:40:32.495349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:40:32.670652 update_engine[1430]: I1101 00:40:32.669934 1430 update_attempter.cc:509] Updating boot flags...
Nov 1 00:40:33.830555 env[1441]: time="2025-11-01T00:40:33.830502217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 1 00:40:34.682057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580302613.mount: Deactivated successfully.
Nov 1 00:40:36.696008 env[1441]: time="2025-11-01T00:40:36.695946037Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:40:36.754775 env[1441]: time="2025-11-01T00:40:36.754715295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:40:36.759269 env[1441]: time="2025-11-01T00:40:36.759216299Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:40:36.763037 env[1441]: time="2025-11-01T00:40:36.762916603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:40:36.763993 env[1441]: time="2025-11-01T00:40:36.763944404Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 1 00:40:36.764594 env[1441]: time="2025-11-01T00:40:36.764567804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 1 00:40:42.517595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 1 00:40:42.517902 systemd[1]: Stopped kubelet.service.
Nov 1 00:40:42.520055 systemd[1]: Starting kubelet.service...
Nov 1 00:40:42.617273 systemd[1]: Started kubelet.service.
Nov 1 00:40:42.653355 kubelet[1938]: E1101 00:40:42.653322 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:40:42.655073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:40:42.655239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:40:52.214509 env[1441]: time="2025-11-01T00:40:52.214451213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:52.220416 env[1441]: time="2025-11-01T00:40:52.220379324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:52.223372 env[1441]: time="2025-11-01T00:40:52.223339530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:52.226460 env[1441]: time="2025-11-01T00:40:52.226430440Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:52.227101 env[1441]: time="2025-11-01T00:40:52.227068963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 1 00:40:52.227637 env[1441]: time="2025-11-01T00:40:52.227610582Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 1 00:40:52.767641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 1 00:40:52.768012 systemd[1]: Stopped kubelet.service. Nov 1 00:40:52.769952 systemd[1]: Starting kubelet.service... Nov 1 00:40:52.864181 systemd[1]: Started kubelet.service. 
Nov 1 00:40:53.519456 kubelet[1947]: E1101 00:40:53.519405 1947 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:40:53.521946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:40:53.522127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:40:54.337618 env[1441]: time="2025-11-01T00:40:54.337566269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:54.342757 env[1441]: time="2025-11-01T00:40:54.342716443Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:54.345844 env[1441]: time="2025-11-01T00:40:54.345810947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:54.349009 env[1441]: time="2025-11-01T00:40:54.348963753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:54.349674 env[1441]: time="2025-11-01T00:40:54.349641376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 1 00:40:54.350398 env[1441]: time="2025-11-01T00:40:54.350372101Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 1 00:40:55.658253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236587218.mount: Deactivated successfully. Nov 1 00:40:56.332715 env[1441]: time="2025-11-01T00:40:56.332602470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:56.340113 env[1441]: time="2025-11-01T00:40:56.340001607Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:56.344170 env[1441]: time="2025-11-01T00:40:56.344076937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:56.348285 env[1441]: time="2025-11-01T00:40:56.348201768Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:56.348709 env[1441]: time="2025-11-01T00:40:56.348678584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 1 00:40:56.349377 env[1441]: time="2025-11-01T00:40:56.349350705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 1 00:40:56.958104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775687061.mount: Deactivated successfully. 
Nov 1 00:40:58.473169 env[1441]: time="2025-11-01T00:40:58.473114709Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:58.477607 env[1441]: time="2025-11-01T00:40:58.477567844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:58.481433 env[1441]: time="2025-11-01T00:40:58.481352358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:58.486298 env[1441]: time="2025-11-01T00:40:58.486204005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:58.487160 env[1441]: time="2025-11-01T00:40:58.487128233Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 1 00:40:58.487884 env[1441]: time="2025-11-01T00:40:58.487857055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:40:59.076896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073666764.mount: Deactivated successfully. 
Nov 1 00:40:59.093289 env[1441]: time="2025-11-01T00:40:59.093187406Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:59.099447 env[1441]: time="2025-11-01T00:40:59.099336387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:59.103209 env[1441]: time="2025-11-01T00:40:59.103125098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:59.106626 env[1441]: time="2025-11-01T00:40:59.106596001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:59.107049 env[1441]: time="2025-11-01T00:40:59.107020213Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:40:59.107689 env[1441]: time="2025-11-01T00:40:59.107663732Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 1 00:40:59.715009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246173467.mount: Deactivated successfully. 
Nov 1 00:41:02.846631 env[1441]: time="2025-11-01T00:41:02.846578213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:02.852435 env[1441]: time="2025-11-01T00:41:02.852384871Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:02.859690 env[1441]: time="2025-11-01T00:41:02.859648769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:02.863955 env[1441]: time="2025-11-01T00:41:02.863920685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:02.864685 env[1441]: time="2025-11-01T00:41:02.864652305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 1 00:41:03.551128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Nov 1 00:41:03.551425 systemd[1]: Stopped kubelet.service. Nov 1 00:41:03.553248 systemd[1]: Starting kubelet.service... Nov 1 00:41:03.665970 systemd[1]: Started kubelet.service. 
Nov 1 00:41:03.722426 kubelet[1973]: E1101 00:41:03.722304 1973 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:41:03.724587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:41:03.724737 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:41:07.547587 systemd[1]: Stopped kubelet.service. Nov 1 00:41:07.550297 systemd[1]: Starting kubelet.service... Nov 1 00:41:07.589512 systemd[1]: Reloading. Nov 1 00:41:07.703127 /usr/lib/systemd/system-generators/torcx-generator[2008]: time="2025-11-01T00:41:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:07.703162 /usr/lib/systemd/system-generators/torcx-generator[2008]: time="2025-11-01T00:41:07Z" level=info msg="torcx already run" Nov 1 00:41:07.799235 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:07.799256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:07.815899 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:41:08.316217 systemd[1]: Started kubelet.service. Nov 1 00:41:08.318567 systemd[1]: Stopping kubelet.service... 
Nov 1 00:41:08.318940 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:41:08.319240 systemd[1]: Stopped kubelet.service. Nov 1 00:41:08.320972 systemd[1]: Starting kubelet.service... Nov 1 00:41:08.416096 systemd[1]: Started kubelet.service. Nov 1 00:41:08.455119 kubelet[2079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:41:08.455119 kubelet[2079]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:41:08.455119 kubelet[2079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:41:08.455589 kubelet[2079]: I1101 00:41:08.455174 2079 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:41:09.111994 kubelet[2079]: I1101 00:41:09.111938 2079 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:41:09.111994 kubelet[2079]: I1101 00:41:09.111985 2079 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:41:09.112768 kubelet[2079]: I1101 00:41:09.112744 2079 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:41:09.282941 kubelet[2079]: E1101 00:41:09.282899 2079 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.33:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:41:09.283574 kubelet[2079]: I1101 00:41:09.283543 2079 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:41:09.289899 kubelet[2079]: E1101 00:41:09.289869 2079 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:41:09.289899 kubelet[2079]: I1101 00:41:09.289895 2079 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:41:09.294502 kubelet[2079]: I1101 00:41:09.294480 2079 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:41:09.294788 kubelet[2079]: I1101 00:41:09.294739 2079 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:41:09.294988 kubelet[2079]: I1101 00:41:09.294783 2079 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-bb3ab03ab7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:41:09.295149 kubelet[2079]: I1101 00:41:09.294995 2079 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:41:09.295149 kubelet[2079]: I1101 00:41:09.295008 2079 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:41:09.295149 kubelet[2079]: I1101 00:41:09.295142 2079 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:41:09.300864 kubelet[2079]: I1101 00:41:09.300823 2079 kubelet.go:480] "Attempting to sync node 
with API server" Nov 1 00:41:09.300950 kubelet[2079]: I1101 00:41:09.300873 2079 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:41:09.300950 kubelet[2079]: I1101 00:41:09.300905 2079 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:41:09.300950 kubelet[2079]: I1101 00:41:09.300925 2079 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:41:09.328347 kubelet[2079]: E1101 00:41:09.328313 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-bb3ab03ab7&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:41:09.329421 kubelet[2079]: E1101 00:41:09.329292 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:41:09.329532 kubelet[2079]: I1101 00:41:09.329510 2079 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:41:09.330248 kubelet[2079]: I1101 00:41:09.330224 2079 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:41:09.331114 kubelet[2079]: W1101 00:41:09.331090 2079 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:41:09.334563 kubelet[2079]: I1101 00:41:09.334543 2079 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:41:09.334647 kubelet[2079]: I1101 00:41:09.334616 2079 server.go:1289] "Started kubelet" Nov 1 00:41:09.341568 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 00:41:09.347427 kubelet[2079]: I1101 00:41:09.347348 2079 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:41:09.350179 kubelet[2079]: I1101 00:41:09.350153 2079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:41:09.350498 kubelet[2079]: I1101 00:41:09.350437 2079 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:41:09.350803 kubelet[2079]: I1101 00:41:09.350782 2079 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:41:09.355085 kubelet[2079]: I1101 00:41:09.355059 2079 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:41:09.358848 kubelet[2079]: I1101 00:41:09.358819 2079 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:41:09.361767 kubelet[2079]: I1101 00:41:09.361735 2079 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:41:09.362205 kubelet[2079]: E1101 00:41:09.362127 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:09.363077 kubelet[2079]: I1101 00:41:09.363061 2079 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:41:09.363247 kubelet[2079]: I1101 00:41:09.363223 2079 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:41:09.365257 kubelet[2079]: E1101 00:41:09.364478 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.200.4.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:41:09.365257 kubelet[2079]: E1101 00:41:09.364555 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-bb3ab03ab7?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="200ms" Nov 1 00:41:09.366521 kubelet[2079]: E1101 00:41:09.346523 2079 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-bb3ab03ab7.1873bb2a3df6350d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-bb3ab03ab7,UID:ci-3510.3.8-n-bb3ab03ab7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-bb3ab03ab7,},FirstTimestamp:2025-11-01 00:41:09.334562061 +0000 UTC m=+0.911723158,LastTimestamp:2025-11-01 00:41:09.334562061 +0000 UTC m=+0.911723158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-bb3ab03ab7,}" Nov 1 00:41:09.366673 kubelet[2079]: I1101 00:41:09.366638 2079 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:41:09.366673 kubelet[2079]: I1101 00:41:09.366653 2079 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:41:09.366776 kubelet[2079]: I1101 00:41:09.366717 2079 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:41:09.373481 kubelet[2079]: E1101 00:41:09.373458 2079 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:41:09.427290 kubelet[2079]: I1101 00:41:09.427229 2079 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:41:09.429627 kubelet[2079]: I1101 00:41:09.429599 2079 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:41:09.429627 kubelet[2079]: I1101 00:41:09.429627 2079 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:41:09.429779 kubelet[2079]: I1101 00:41:09.429652 2079 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:41:09.429779 kubelet[2079]: I1101 00:41:09.429661 2079 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:41:09.429779 kubelet[2079]: E1101 00:41:09.429710 2079 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:41:09.431485 kubelet[2079]: E1101 00:41:09.431454 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:41:09.456262 kubelet[2079]: I1101 00:41:09.456240 2079 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:41:09.456610 kubelet[2079]: I1101 00:41:09.456278 2079 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:41:09.456610 kubelet[2079]: I1101 00:41:09.456301 2079 state_mem.go:36] 
"Initialized new in-memory state store" Nov 1 00:41:09.461868 kubelet[2079]: I1101 00:41:09.461845 2079 policy_none.go:49] "None policy: Start" Nov 1 00:41:09.461868 kubelet[2079]: I1101 00:41:09.461868 2079 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:41:09.462013 kubelet[2079]: I1101 00:41:09.461881 2079 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:41:09.462917 kubelet[2079]: E1101 00:41:09.462888 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:09.470631 systemd[1]: Created slice kubepods.slice. Nov 1 00:41:09.475225 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:41:09.478328 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:41:09.485329 kubelet[2079]: E1101 00:41:09.485302 2079 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:41:09.485465 kubelet[2079]: I1101 00:41:09.485447 2079 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:41:09.485545 kubelet[2079]: I1101 00:41:09.485469 2079 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:41:09.486162 kubelet[2079]: I1101 00:41:09.486147 2079 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:41:09.487202 kubelet[2079]: E1101 00:41:09.487137 2079 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:41:09.487306 kubelet[2079]: E1101 00:41:09.487288 2079 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:09.542476 systemd[1]: Created slice kubepods-burstable-podaf80eab214881172e6d0e2eec1bcd1b6.slice. 
Nov 1 00:41:09.550663 kubelet[2079]: E1101 00:41:09.550636 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.554281 systemd[1]: Created slice kubepods-burstable-pod6be4e25bb58441076c33e94e258d0664.slice. Nov 1 00:41:09.556949 kubelet[2079]: E1101 00:41:09.556924 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.558948 systemd[1]: Created slice kubepods-burstable-pod5e95854799a25af01c66241d354d3da2.slice. Nov 1 00:41:09.560695 kubelet[2079]: E1101 00:41:09.560669 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.566210 kubelet[2079]: E1101 00:41:09.566127 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-bb3ab03ab7?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="400ms" Nov 1 00:41:09.587813 kubelet[2079]: I1101 00:41:09.587777 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.588202 kubelet[2079]: E1101 00:41:09.588152 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666368 kubelet[2079]: I1101 00:41:09.664846 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/af80eab214881172e6d0e2eec1bcd1b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"af80eab214881172e6d0e2eec1bcd1b6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666368 kubelet[2079]: I1101 00:41:09.665064 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666368 kubelet[2079]: I1101 00:41:09.665137 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e95854799a25af01c66241d354d3da2-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"5e95854799a25af01c66241d354d3da2\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666368 kubelet[2079]: I1101 00:41:09.665166 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af80eab214881172e6d0e2eec1bcd1b6-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"af80eab214881172e6d0e2eec1bcd1b6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666368 kubelet[2079]: I1101 00:41:09.665192 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af80eab214881172e6d0e2eec1bcd1b6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"af80eab214881172e6d0e2eec1bcd1b6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666663 kubelet[2079]: I1101 00:41:09.665234 2079 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666663 kubelet[2079]: I1101 00:41:09.665263 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666663 kubelet[2079]: I1101 00:41:09.665305 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.666663 kubelet[2079]: I1101 00:41:09.665331 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.790168 kubelet[2079]: I1101 00:41:09.790126 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.790471 kubelet[2079]: E1101 00:41:09.790442 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": 
dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:09.852190 env[1441]: time="2025-11-01T00:41:09.852144324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-bb3ab03ab7,Uid:af80eab214881172e6d0e2eec1bcd1b6,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:09.857612 env[1441]: time="2025-11-01T00:41:09.857573947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7,Uid:6be4e25bb58441076c33e94e258d0664,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:09.862539 env[1441]: time="2025-11-01T00:41:09.862502659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-bb3ab03ab7,Uid:5e95854799a25af01c66241d354d3da2,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:09.967303 kubelet[2079]: E1101 00:41:09.967147 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-bb3ab03ab7?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="800ms" Nov 1 00:41:10.192272 kubelet[2079]: I1101 00:41:10.192231 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:10.192661 kubelet[2079]: E1101 00:41:10.192626 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:10.257562 kubelet[2079]: E1101 00:41:10.257470 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:41:10.279703 
kubelet[2079]: E1101 00:41:10.279664 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:41:10.450581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730276575.mount: Deactivated successfully. Nov 1 00:41:10.555877 env[1441]: time="2025-11-01T00:41:10.555827506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:10.558889 env[1441]: time="2025-11-01T00:41:10.558852373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:10.567845 env[1441]: time="2025-11-01T00:41:10.567807271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:10.571353 env[1441]: time="2025-11-01T00:41:10.571325049Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:10.592990 env[1441]: time="2025-11-01T00:41:10.592926328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:10.596655 env[1441]: time="2025-11-01T00:41:10.596626210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 
00:41:10.702628 env[1441]: time="2025-11-01T00:41:10.702576058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:10.768863 kubelet[2079]: E1101 00:41:10.768685 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-bb3ab03ab7&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:41:10.769285 kubelet[2079]: E1101 00:41:10.768877 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-bb3ab03ab7?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="1.6s" Nov 1 00:41:10.784563 kubelet[2079]: E1101 00:41:10.784518 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:41:10.995612 kubelet[2079]: I1101 00:41:10.995036 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:10.995612 kubelet[2079]: E1101 00:41:10.995437 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:11.331229 kubelet[2079]: E1101 00:41:11.331185 2079 certificate_manager.go:596] "Failed while requesting a signed certificate 
from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:41:12.369804 kubelet[2079]: E1101 00:41:12.369760 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-bb3ab03ab7?timeout=10s\": dial tcp 10.200.4.33:6443: connect: connection refused" interval="3.2s" Nov 1 00:41:12.500787 env[1441]: time="2025-11-01T00:41:12.500717828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:12.597927 kubelet[2079]: I1101 00:41:12.597646 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:12.598158 kubelet[2079]: E1101 00:41:12.598024 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.33:6443/api/v1/nodes\": dial tcp 10.200.4.33:6443: connect: connection refused" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:12.940346 kubelet[2079]: E1101 00:41:12.940293 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-bb3ab03ab7&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:41:13.218504 kubelet[2079]: E1101 00:41:13.218381 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:41:13.382821 kubelet[2079]: E1101 00:41:13.382772 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:41:13.842314 env[1441]: time="2025-11-01T00:41:13.842251091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:13.849745 env[1441]: time="2025-11-01T00:41:13.849687844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:13.854125 env[1441]: time="2025-11-01T00:41:13.854092535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:13.875894 env[1441]: time="2025-11-01T00:41:13.875859682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:13.901149 kubelet[2079]: E1101 00:41:13.900792 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.33:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 
00:41:13.911396 env[1441]: time="2025-11-01T00:41:13.909300571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:13.911396 env[1441]: time="2025-11-01T00:41:13.909341671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:13.911396 env[1441]: time="2025-11-01T00:41:13.909355972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:13.911396 env[1441]: time="2025-11-01T00:41:13.909509775Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58a08d5f6fad6b89146ed161bd4aadff5a530b8de2110b27302ce97808e12447 pid=2123 runtime=io.containerd.runc.v2 Nov 1 00:41:13.920901 env[1441]: time="2025-11-01T00:41:13.920839308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:13.921071 env[1441]: time="2025-11-01T00:41:13.921045312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:13.921188 env[1441]: time="2025-11-01T00:41:13.921166615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:13.921496 env[1441]: time="2025-11-01T00:41:13.921413820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c8776fb71a47a7b0a6a2c370ba9ecbb28bfb3029fef7bfcfe081505c12a4c8a pid=2138 runtime=io.containerd.runc.v2 Nov 1 00:41:13.953472 systemd[1]: Started cri-containerd-8c8776fb71a47a7b0a6a2c370ba9ecbb28bfb3029fef7bfcfe081505c12a4c8a.scope. 
Nov 1 00:41:13.966129 env[1441]: time="2025-11-01T00:41:13.965513927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:13.966129 env[1441]: time="2025-11-01T00:41:13.965550528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:13.966129 env[1441]: time="2025-11-01T00:41:13.965563528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:13.966129 env[1441]: time="2025-11-01T00:41:13.965672630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70b9322d1747373fc0255d47cda1dc2ca949a5bdc24b8364c8ceb195cff0cc8 pid=2181 runtime=io.containerd.runc.v2 Nov 1 00:41:13.972907 systemd[1]: Started cri-containerd-58a08d5f6fad6b89146ed161bd4aadff5a530b8de2110b27302ce97808e12447.scope. Nov 1 00:41:13.995806 systemd[1]: Started cri-containerd-e70b9322d1747373fc0255d47cda1dc2ca949a5bdc24b8364c8ceb195cff0cc8.scope. 
Nov 1 00:41:14.038351 env[1441]: time="2025-11-01T00:41:14.038307306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7,Uid:6be4e25bb58441076c33e94e258d0664,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c8776fb71a47a7b0a6a2c370ba9ecbb28bfb3029fef7bfcfe081505c12a4c8a\"" Nov 1 00:41:14.051693 env[1441]: time="2025-11-01T00:41:14.051656674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-bb3ab03ab7,Uid:af80eab214881172e6d0e2eec1bcd1b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"58a08d5f6fad6b89146ed161bd4aadff5a530b8de2110b27302ce97808e12447\"" Nov 1 00:41:14.054930 env[1441]: time="2025-11-01T00:41:14.054870739Z" level=info msg="CreateContainer within sandbox \"8c8776fb71a47a7b0a6a2c370ba9ecbb28bfb3029fef7bfcfe081505c12a4c8a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:41:14.060818 env[1441]: time="2025-11-01T00:41:14.060784758Z" level=info msg="CreateContainer within sandbox \"58a08d5f6fad6b89146ed161bd4aadff5a530b8de2110b27302ce97808e12447\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:41:14.083479 env[1441]: time="2025-11-01T00:41:14.083445813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-bb3ab03ab7,Uid:5e95854799a25af01c66241d354d3da2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e70b9322d1747373fc0255d47cda1dc2ca949a5bdc24b8364c8ceb195cff0cc8\"" Nov 1 00:41:14.090612 env[1441]: time="2025-11-01T00:41:14.090585656Z" level=info msg="CreateContainer within sandbox \"e70b9322d1747373fc0255d47cda1dc2ca949a5bdc24b8364c8ceb195cff0cc8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:41:14.129845 env[1441]: time="2025-11-01T00:41:14.129259932Z" level=info msg="CreateContainer within sandbox \"8c8776fb71a47a7b0a6a2c370ba9ecbb28bfb3029fef7bfcfe081505c12a4c8a\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c63a0208842fe52aa9818f1f7e81e5571740d1b40371e8844d9fba472cce6c05\"" Nov 1 00:41:14.130266 env[1441]: time="2025-11-01T00:41:14.130236452Z" level=info msg="StartContainer for \"c63a0208842fe52aa9818f1f7e81e5571740d1b40371e8844d9fba472cce6c05\"" Nov 1 00:41:14.146775 env[1441]: time="2025-11-01T00:41:14.146701083Z" level=info msg="CreateContainer within sandbox \"58a08d5f6fad6b89146ed161bd4aadff5a530b8de2110b27302ce97808e12447\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d36bc1b6920222518d8bd27f6734bc138afbc2dfe994a216fb9ea2d9e1d28e21\"" Nov 1 00:41:14.148594 env[1441]: time="2025-11-01T00:41:14.147312295Z" level=info msg="StartContainer for \"d36bc1b6920222518d8bd27f6734bc138afbc2dfe994a216fb9ea2d9e1d28e21\"" Nov 1 00:41:14.148297 systemd[1]: Started cri-containerd-c63a0208842fe52aa9818f1f7e81e5571740d1b40371e8844d9fba472cce6c05.scope. Nov 1 00:41:14.166790 env[1441]: time="2025-11-01T00:41:14.166740585Z" level=info msg="CreateContainer within sandbox \"e70b9322d1747373fc0255d47cda1dc2ca949a5bdc24b8364c8ceb195cff0cc8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bef32efe2b218fd21f4599d3e5457b0a378e533e420f37ee1f36afab016d6cb1\"" Nov 1 00:41:14.167482 env[1441]: time="2025-11-01T00:41:14.167448899Z" level=info msg="StartContainer for \"bef32efe2b218fd21f4599d3e5457b0a378e533e420f37ee1f36afab016d6cb1\"" Nov 1 00:41:14.185046 systemd[1]: Started cri-containerd-d36bc1b6920222518d8bd27f6734bc138afbc2dfe994a216fb9ea2d9e1d28e21.scope. Nov 1 00:41:14.203757 systemd[1]: Started cri-containerd-bef32efe2b218fd21f4599d3e5457b0a378e533e420f37ee1f36afab016d6cb1.scope. 
Nov 1 00:41:14.241072 env[1441]: time="2025-11-01T00:41:14.241017376Z" level=info msg="StartContainer for \"c63a0208842fe52aa9818f1f7e81e5571740d1b40371e8844d9fba472cce6c05\" returns successfully" Nov 1 00:41:14.277349 env[1441]: time="2025-11-01T00:41:14.277297305Z" level=info msg="StartContainer for \"d36bc1b6920222518d8bd27f6734bc138afbc2dfe994a216fb9ea2d9e1d28e21\" returns successfully" Nov 1 00:41:14.350867 env[1441]: time="2025-11-01T00:41:14.350810481Z" level=info msg="StartContainer for \"bef32efe2b218fd21f4599d3e5457b0a378e533e420f37ee1f36afab016d6cb1\" returns successfully" Nov 1 00:41:14.443375 kubelet[2079]: E1101 00:41:14.443087 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:14.446714 kubelet[2079]: E1101 00:41:14.446680 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:14.449995 kubelet[2079]: E1101 00:41:14.449163 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:15.452534 kubelet[2079]: E1101 00:41:15.452503 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:15.454249 kubelet[2079]: E1101 00:41:15.454225 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:15.454893 kubelet[2079]: E1101 00:41:15.454652 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:15.800633 kubelet[2079]: I1101 00:41:15.800598 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:16.453557 kubelet[2079]: E1101 00:41:16.453523 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:16.454050 kubelet[2079]: E1101 00:41:16.454028 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:16.515823 kubelet[2079]: E1101 00:41:16.515788 2079 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-bb3ab03ab7\" not found" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:16.579168 kubelet[2079]: I1101 00:41:16.579137 2079 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:16.579364 kubelet[2079]: E1101 00:41:16.579349 2079 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-bb3ab03ab7\": node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:16.639873 kubelet[2079]: E1101 00:41:16.639836 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:16.650028 kubelet[2079]: E1101 00:41:16.649895 2079 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-bb3ab03ab7.1873bb2a3df6350d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-bb3ab03ab7,UID:ci-3510.3.8-n-bb3ab03ab7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-bb3ab03ab7,},FirstTimestamp:2025-11-01 00:41:09.334562061 +0000 UTC m=+0.911723158,LastTimestamp:2025-11-01 00:41:09.334562061 +0000 UTC m=+0.911723158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-bb3ab03ab7,}" Nov 1 00:41:16.741047 kubelet[2079]: E1101 00:41:16.740904 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:16.760144 kubelet[2079]: E1101 00:41:16.760027 2079 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-bb3ab03ab7.1873bb2a40478418 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-bb3ab03ab7,UID:ci-3510.3.8-n-bb3ab03ab7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-bb3ab03ab7,},FirstTimestamp:2025-11-01 00:41:09.373445144 +0000 UTC m=+0.950606241,LastTimestamp:2025-11-01 00:41:09.373445144 +0000 UTC m=+0.950606241,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-bb3ab03ab7,}" Nov 1 00:41:16.842349 kubelet[2079]: E1101 00:41:16.842310 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:16.943276 kubelet[2079]: E1101 00:41:16.943222 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.043934 kubelet[2079]: E1101 00:41:17.043887 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.144536 kubelet[2079]: E1101 00:41:17.144492 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.244994 kubelet[2079]: E1101 00:41:17.244913 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.345514 kubelet[2079]: E1101 00:41:17.345365 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.446507 kubelet[2079]: E1101 00:41:17.446454 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.546733 kubelet[2079]: E1101 00:41:17.546666 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.647073 kubelet[2079]: E1101 00:41:17.646932 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:17.763342 kubelet[2079]: I1101 00:41:17.763296 2079 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:17.776479 kubelet[2079]: I1101 00:41:17.776450 2079 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:41:17.776655 kubelet[2079]: I1101 00:41:17.776596 2079 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:17.784147 kubelet[2079]: I1101 00:41:17.784123 2079 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" 
Nov 1 00:41:17.784434 kubelet[2079]: I1101 00:41:17.784413 2079 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:17.796928 kubelet[2079]: I1101 00:41:17.796888 2079 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:41:18.332100 kubelet[2079]: I1101 00:41:18.332052 2079 apiserver.go:52] "Watching apiserver" Nov 1 00:41:18.364157 kubelet[2079]: I1101 00:41:18.364116 2079 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:41:19.212052 systemd[1]: Reloading. Nov 1 00:41:19.293573 /usr/lib/systemd/system-generators/torcx-generator[2379]: time="2025-11-01T00:41:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:19.293605 /usr/lib/systemd/system-generators/torcx-generator[2379]: time="2025-11-01T00:41:19Z" level=info msg="torcx already run" Nov 1 00:41:19.393255 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:19.393274 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:19.411729 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:41:19.461743 kubelet[2079]: I1101 00:41:19.461684 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" podStartSLOduration=2.461664493 podStartE2EDuration="2.461664493s" podCreationTimestamp="2025-11-01 00:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:19.460134566 +0000 UTC m=+11.037295663" watchObservedRunningTime="2025-11-01 00:41:19.461664493 +0000 UTC m=+11.038825590" Nov 1 00:41:19.494760 kubelet[2079]: I1101 00:41:19.494631 2079 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" podStartSLOduration=2.49460148 podStartE2EDuration="2.49460148s" podCreationTimestamp="2025-11-01 00:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:19.476030249 +0000 UTC m=+11.053191346" watchObservedRunningTime="2025-11-01 00:41:19.49460148 +0000 UTC m=+11.071762477" Nov 1 00:41:19.553653 systemd[1]: Stopping kubelet.service... Nov 1 00:41:19.577421 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:41:19.577638 systemd[1]: Stopped kubelet.service. Nov 1 00:41:19.579665 systemd[1]: Starting kubelet.service... Nov 1 00:41:19.742269 systemd[1]: Started kubelet.service. Nov 1 00:41:19.794719 kubelet[2445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:41:19.795106 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 1 00:41:19.795154 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:41:19.795272 kubelet[2445]: I1101 00:41:19.795238 2445 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:41:19.800941 kubelet[2445]: I1101 00:41:19.800918 2445 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:41:19.801089 kubelet[2445]: I1101 00:41:19.801079 2445 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:41:19.801337 kubelet[2445]: I1101 00:41:19.801326 2445 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:41:19.802721 kubelet[2445]: I1101 00:41:19.802704 2445 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:41:20.306626 kubelet[2445]: I1101 00:41:20.305853 2445 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:41:20.311754 kubelet[2445]: E1101 00:41:20.311723 2445 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:41:20.311754 kubelet[2445]: I1101 00:41:20.311757 2445 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:41:20.318714 kubelet[2445]: I1101 00:41:20.318690 2445 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:41:20.318941 kubelet[2445]: I1101 00:41:20.318916 2445 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:41:20.319115 kubelet[2445]: I1101 00:41:20.318939 2445 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-bb3ab03ab7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:41:20.319249 kubelet[2445]: I1101 00:41:20.319117 2445 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
00:41:20.319249 kubelet[2445]: I1101 00:41:20.319130 2445 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:41:20.319249 kubelet[2445]: I1101 00:41:20.319183 2445 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:41:20.319442 kubelet[2445]: I1101 00:41:20.319341 2445 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:41:20.319442 kubelet[2445]: I1101 00:41:20.319357 2445 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:41:20.319442 kubelet[2445]: I1101 00:41:20.319384 2445 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:41:20.319442 kubelet[2445]: I1101 00:41:20.319397 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:41:20.320561 kubelet[2445]: I1101 00:41:20.320542 2445 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:41:20.321237 kubelet[2445]: I1101 00:41:20.321212 2445 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:41:20.326811 kubelet[2445]: I1101 00:41:20.326791 2445 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:41:20.326900 kubelet[2445]: I1101 00:41:20.326837 2445 server.go:1289] "Started kubelet" Nov 1 00:41:20.332803 kubelet[2445]: I1101 00:41:20.330092 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:41:20.342586 kubelet[2445]: I1101 00:41:20.342550 2445 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:41:20.343722 kubelet[2445]: I1101 00:41:20.343687 2445 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:41:20.350071 kubelet[2445]: I1101 00:41:20.350024 2445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:41:20.350354 kubelet[2445]: I1101 00:41:20.350335 2445 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:41:20.350688 kubelet[2445]: I1101 00:41:20.350667 2445 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:41:20.352932 kubelet[2445]: I1101 00:41:20.352917 2445 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:41:20.353265 kubelet[2445]: E1101 00:41:20.353248 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-bb3ab03ab7\" not found" Nov 1 00:41:20.353939 kubelet[2445]: I1101 00:41:20.353922 2445 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:41:20.354176 kubelet[2445]: I1101 00:41:20.354161 2445 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:41:20.367941 kubelet[2445]: E1101 00:41:20.367920 2445 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:41:20.368342 kubelet[2445]: I1101 00:41:20.368323 2445 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:41:20.368448 kubelet[2445]: I1101 00:41:20.368437 2445 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:41:20.368604 kubelet[2445]: I1101 00:41:20.368584 2445 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:41:20.402661 kubelet[2445]: I1101 00:41:20.402401 2445 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:41:20.404506 kubelet[2445]: I1101 00:41:20.404478 2445 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:41:20.404506 kubelet[2445]: I1101 00:41:20.404505 2445 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:41:20.404667 kubelet[2445]: I1101 00:41:20.404526 2445 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:41:20.404667 kubelet[2445]: I1101 00:41:20.404540 2445 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:41:20.404667 kubelet[2445]: E1101 00:41:20.404585 2445 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:41:20.427561 kubelet[2445]: I1101 00:41:20.427527 2445 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:41:20.427561 kubelet[2445]: I1101 00:41:20.427544 2445 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:41:20.427561 kubelet[2445]: I1101 00:41:20.427566 2445 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:41:20.427784 kubelet[2445]: I1101 00:41:20.427711 2445 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:41:20.427784 kubelet[2445]: I1101 00:41:20.427725 2445 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:41:20.427784 kubelet[2445]: I1101 00:41:20.427745 2445 policy_none.go:49] "None policy: Start" Nov 1 00:41:20.427784 kubelet[2445]: I1101 00:41:20.427758 2445 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:41:20.427784 kubelet[2445]: I1101 00:41:20.427772 2445 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:41:20.428019 kubelet[2445]: I1101 00:41:20.427881 2445 state_mem.go:75] "Updated machine memory state" Nov 1 00:41:20.431550 kubelet[2445]: E1101 00:41:20.431528 2445 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:41:20.431751 kubelet[2445]: I1101 00:41:20.431730 
2445 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:41:20.431830 kubelet[2445]: I1101 00:41:20.431758 2445 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:41:20.432866 kubelet[2445]: I1101 00:41:20.432473 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:41:20.435056 kubelet[2445]: E1101 00:41:20.435027 2445 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:41:20.446387 sudo[2480]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:41:20.447581 sudo[2480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:41:20.505734 kubelet[2445]: I1101 00:41:20.505698 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.507242 kubelet[2445]: I1101 00:41:20.506182 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.507537 kubelet[2445]: I1101 00:41:20.506323 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.518609 kubelet[2445]: I1101 00:41:20.518582 2445 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:41:20.518714 kubelet[2445]: E1101 00:41:20.518642 2445 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-bb3ab03ab7\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.519520 kubelet[2445]: I1101 00:41:20.519495 2445 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:41:20.519600 kubelet[2445]: E1101 00:41:20.519550 2445 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-bb3ab03ab7\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.519658 kubelet[2445]: I1101 00:41:20.519623 2445 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:41:20.519658 kubelet[2445]: E1101 00:41:20.519656 2445 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.534969 kubelet[2445]: I1101 00:41:20.534954 2445 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.549479 kubelet[2445]: I1101 00:41:20.549462 2445 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.549652 kubelet[2445]: I1101 00:41:20.549641 2445 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.656213 kubelet[2445]: I1101 00:41:20.656104 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af80eab214881172e6d0e2eec1bcd1b6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"af80eab214881172e6d0e2eec1bcd1b6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.656426 kubelet[2445]: I1101 00:41:20.656410 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" 
(UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.656541 kubelet[2445]: I1101 00:41:20.656520 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.656646 kubelet[2445]: I1101 00:41:20.656630 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.656758 kubelet[2445]: I1101 00:41:20.656741 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af80eab214881172e6d0e2eec1bcd1b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"af80eab214881172e6d0e2eec1bcd1b6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.656861 kubelet[2445]: I1101 00:41:20.656845 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.656998 kubelet[2445]: I1101 00:41:20.656966 2445 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6be4e25bb58441076c33e94e258d0664-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"6be4e25bb58441076c33e94e258d0664\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.657132 kubelet[2445]: I1101 00:41:20.657115 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e95854799a25af01c66241d354d3da2-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"5e95854799a25af01c66241d354d3da2\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.657228 kubelet[2445]: I1101 00:41:20.657214 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af80eab214881172e6d0e2eec1bcd1b6-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-bb3ab03ab7\" (UID: \"af80eab214881172e6d0e2eec1bcd1b6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-bb3ab03ab7" Nov 1 00:41:20.998164 sudo[2480]: pam_unix(sudo:session): session closed for user root Nov 1 00:41:21.320010 kubelet[2445]: I1101 00:41:21.319968 2445 apiserver.go:52] "Watching apiserver" Nov 1 00:41:21.354990 kubelet[2445]: I1101 00:41:21.354940 2445 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:41:22.716738 sudo[1730]: pam_unix(sudo:session): session closed for user root Nov 1 00:41:22.825084 sshd[1727]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:22.828553 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:41:22.828782 systemd[1]: sshd@4-10.200.4.33:22-10.200.16.10:49198.service: Deactivated successfully. Nov 1 00:41:22.829713 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 1 00:41:22.829913 systemd[1]: session-7.scope: Consumed 6.333s CPU time. Nov 1 00:41:22.830649 systemd-logind[1429]: Removed session 7. Nov 1 00:41:24.886789 kubelet[2445]: I1101 00:41:24.886747 2445 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:41:24.887468 env[1441]: time="2025-11-01T00:41:24.887242080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:41:24.887862 kubelet[2445]: I1101 00:41:24.887504 2445 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:41:25.842762 systemd[1]: Created slice kubepods-burstable-pod06a79739_2cc5_4e9c_be25_f79ee393a010.slice. Nov 1 00:41:25.854498 systemd[1]: Created slice kubepods-besteffort-pod5efb5317_6556_4147_99ff_0358eda7b8d4.slice. Nov 1 00:41:25.892535 kubelet[2445]: I1101 00:41:25.892498 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-kernel\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.893076 kubelet[2445]: I1101 00:41:25.893049 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5efb5317-6556-4147-99ff-0358eda7b8d4-lib-modules\") pod \"kube-proxy-27rw5\" (UID: \"5efb5317-6556-4147-99ff-0358eda7b8d4\") " pod="kube-system/kube-proxy-27rw5" Nov 1 00:41:25.893233 kubelet[2445]: I1101 00:41:25.893213 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cni-path\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 
00:41:25.893360 kubelet[2445]: I1101 00:41:25.893340 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phdb\" (UniqueName: \"kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-kube-api-access-4phdb\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.893505 kubelet[2445]: I1101 00:41:25.893485 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-bpf-maps\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.893659 kubelet[2445]: I1101 00:41:25.893639 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-etc-cni-netd\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.893783 kubelet[2445]: I1101 00:41:25.893765 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-config-path\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.893905 kubelet[2445]: I1101 00:41:25.893886 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5efb5317-6556-4147-99ff-0358eda7b8d4-xtables-lock\") pod \"kube-proxy-27rw5\" (UID: \"5efb5317-6556-4147-99ff-0358eda7b8d4\") " pod="kube-system/kube-proxy-27rw5" Nov 1 00:41:25.894044 kubelet[2445]: I1101 00:41:25.894025 2445 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz68l\" (UniqueName: \"kubernetes.io/projected/5efb5317-6556-4147-99ff-0358eda7b8d4-kube-api-access-pz68l\") pod \"kube-proxy-27rw5\" (UID: \"5efb5317-6556-4147-99ff-0358eda7b8d4\") " pod="kube-system/kube-proxy-27rw5" Nov 1 00:41:25.894163 kubelet[2445]: I1101 00:41:25.894146 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-run\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.894308 kubelet[2445]: I1101 00:41:25.894272 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-xtables-lock\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.894434 kubelet[2445]: I1101 00:41:25.894418 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-net\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.894553 kubelet[2445]: I1101 00:41:25.894538 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-hubble-tls\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.894664 kubelet[2445]: I1101 00:41:25.894646 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/5efb5317-6556-4147-99ff-0358eda7b8d4-kube-proxy\") pod \"kube-proxy-27rw5\" (UID: \"5efb5317-6556-4147-99ff-0358eda7b8d4\") " pod="kube-system/kube-proxy-27rw5" Nov 1 00:41:25.894768 kubelet[2445]: I1101 00:41:25.894754 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-hostproc\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.894866 kubelet[2445]: I1101 00:41:25.894852 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-cgroup\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.894965 kubelet[2445]: I1101 00:41:25.894950 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-lib-modules\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.895729 kubelet[2445]: I1101 00:41:25.895704 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06a79739-2cc5-4e9c-be25-f79ee393a010-clustermesh-secrets\") pod \"cilium-fntg2\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") " pod="kube-system/cilium-fntg2" Nov 1 00:41:25.997163 kubelet[2445]: I1101 00:41:25.997123 2445 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:41:26.060233 systemd[1]: Created slice kubepods-besteffort-pod6c9ab0cf_be14_4a66_b9b5_b1ad73f38f4d.slice. Nov 1 00:41:26.097576 kubelet[2445]: I1101 00:41:26.097469 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6r6k5\" (UID: \"6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d\") " pod="kube-system/cilium-operator-6c4d7847fc-6r6k5" Nov 1 00:41:26.097869 kubelet[2445]: I1101 00:41:26.097846 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wtx\" (UniqueName: \"kubernetes.io/projected/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-kube-api-access-z4wtx\") pod \"cilium-operator-6c4d7847fc-6r6k5\" (UID: \"6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d\") " pod="kube-system/cilium-operator-6c4d7847fc-6r6k5" Nov 1 00:41:26.161450 env[1441]: time="2025-11-01T00:41:26.161399966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fntg2,Uid:06a79739-2cc5-4e9c-be25-f79ee393a010,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:26.164568 env[1441]: time="2025-11-01T00:41:26.164530013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27rw5,Uid:5efb5317-6556-4147-99ff-0358eda7b8d4,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:26.214042 env[1441]: time="2025-11-01T00:41:26.213948565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:26.214042 env[1441]: time="2025-11-01T00:41:26.214019566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:26.218260 env[1441]: time="2025-11-01T00:41:26.218202229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:26.218628 env[1441]: time="2025-11-01T00:41:26.218580335Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466 pid=2531 runtime=io.containerd.runc.v2 Nov 1 00:41:26.223740 env[1441]: time="2025-11-01T00:41:26.223662212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:26.223838 env[1441]: time="2025-11-01T00:41:26.223755414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:26.223901 env[1441]: time="2025-11-01T00:41:26.223812215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:26.224116 env[1441]: time="2025-11-01T00:41:26.224075719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3b7beb75032041afc719c559004cde3c241ccfcdb786e16043037e5e0ecddb7 pid=2550 runtime=io.containerd.runc.v2 Nov 1 00:41:26.236324 systemd[1]: Started cri-containerd-3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466.scope. Nov 1 00:41:26.254769 systemd[1]: Started cri-containerd-e3b7beb75032041afc719c559004cde3c241ccfcdb786e16043037e5e0ecddb7.scope. 
Nov 1 00:41:26.280065 env[1441]: time="2025-11-01T00:41:26.279014254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fntg2,Uid:06a79739-2cc5-4e9c-be25-f79ee393a010,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\"" Nov 1 00:41:26.284337 env[1441]: time="2025-11-01T00:41:26.283104916Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:41:26.291043 env[1441]: time="2025-11-01T00:41:26.291009536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27rw5,Uid:5efb5317-6556-4147-99ff-0358eda7b8d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3b7beb75032041afc719c559004cde3c241ccfcdb786e16043037e5e0ecddb7\"" Nov 1 00:41:26.298723 env[1441]: time="2025-11-01T00:41:26.298686053Z" level=info msg="CreateContainer within sandbox \"e3b7beb75032041afc719c559004cde3c241ccfcdb786e16043037e5e0ecddb7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:41:26.325627 env[1441]: time="2025-11-01T00:41:26.325582962Z" level=info msg="CreateContainer within sandbox \"e3b7beb75032041afc719c559004cde3c241ccfcdb786e16043037e5e0ecddb7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"710caed653f8668fab30943f692dd5a59ebdbb3189532c806a7af4885fcbf28f\"" Nov 1 00:41:26.327745 env[1441]: time="2025-11-01T00:41:26.326342573Z" level=info msg="StartContainer for \"710caed653f8668fab30943f692dd5a59ebdbb3189532c806a7af4885fcbf28f\"" Nov 1 00:41:26.345364 systemd[1]: Started cri-containerd-710caed653f8668fab30943f692dd5a59ebdbb3189532c806a7af4885fcbf28f.scope. 
Nov 1 00:41:26.363479 env[1441]: time="2025-11-01T00:41:26.363397436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6r6k5,Uid:6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:26.386572 env[1441]: time="2025-11-01T00:41:26.386534288Z" level=info msg="StartContainer for \"710caed653f8668fab30943f692dd5a59ebdbb3189532c806a7af4885fcbf28f\" returns successfully" Nov 1 00:41:26.407316 env[1441]: time="2025-11-01T00:41:26.407242603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:26.407478 env[1441]: time="2025-11-01T00:41:26.407292904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:26.407478 env[1441]: time="2025-11-01T00:41:26.407306404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:26.407606 env[1441]: time="2025-11-01T00:41:26.407473406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f pid=2649 runtime=io.containerd.runc.v2 Nov 1 00:41:26.420648 systemd[1]: Started cri-containerd-1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f.scope. 
Nov 1 00:41:26.497287 env[1441]: time="2025-11-01T00:41:26.497236771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6r6k5,Uid:6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f\"" Nov 1 00:41:27.747170 kubelet[2445]: I1101 00:41:27.747104 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27rw5" podStartSLOduration=2.7470838239999997 podStartE2EDuration="2.747083824s" podCreationTimestamp="2025-11-01 00:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:26.452646993 +0000 UTC m=+6.702159355" watchObservedRunningTime="2025-11-01 00:41:27.747083824 +0000 UTC m=+7.996595986" Nov 1 00:41:32.067441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472678591.mount: Deactivated successfully. 
Nov 1 00:41:35.226102 env[1441]: time="2025-11-01T00:41:35.226046954Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:35.232418 env[1441]: time="2025-11-01T00:41:35.232379733Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:35.235428 env[1441]: time="2025-11-01T00:41:35.235397071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:35.235942 env[1441]: time="2025-11-01T00:41:35.235907378Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:41:35.238693 env[1441]: time="2025-11-01T00:41:35.237858902Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:41:35.245352 env[1441]: time="2025-11-01T00:41:35.245321096Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:41:35.267051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983280203.mount: Deactivated successfully. Nov 1 00:41:35.275165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283857749.mount: Deactivated successfully. 
Nov 1 00:41:35.283217 env[1441]: time="2025-11-01T00:41:35.283183073Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\"" Nov 1 00:41:35.283635 env[1441]: time="2025-11-01T00:41:35.283559778Z" level=info msg="StartContainer for \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\"" Nov 1 00:41:35.304601 systemd[1]: Started cri-containerd-f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249.scope. Nov 1 00:41:35.340893 env[1441]: time="2025-11-01T00:41:35.340843399Z" level=info msg="StartContainer for \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\" returns successfully" Nov 1 00:41:35.343869 systemd[1]: cri-containerd-f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249.scope: Deactivated successfully. Nov 1 00:41:36.264370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249-rootfs.mount: Deactivated successfully. 
Nov 1 00:41:39.220034 env[1441]: time="2025-11-01T00:41:39.219984764Z" level=info msg="shim disconnected" id=f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249 Nov 1 00:41:39.220034 env[1441]: time="2025-11-01T00:41:39.220030264Z" level=warning msg="cleaning up after shim disconnected" id=f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249 namespace=k8s.io Nov 1 00:41:39.220034 env[1441]: time="2025-11-01T00:41:39.220040665Z" level=info msg="cleaning up dead shim" Nov 1 00:41:39.228119 env[1441]: time="2025-11-01T00:41:39.228078458Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:41:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2870 runtime=io.containerd.runc.v2\n" Nov 1 00:41:39.488062 env[1441]: time="2025-11-01T00:41:39.487939384Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:41:39.543098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820964919.mount: Deactivated successfully. Nov 1 00:41:39.555033 env[1441]: time="2025-11-01T00:41:39.554973064Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\"" Nov 1 00:41:39.557020 env[1441]: time="2025-11-01T00:41:39.555848674Z" level=info msg="StartContainer for \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\"" Nov 1 00:41:39.577409 systemd[1]: Started cri-containerd-fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d.scope. 
Nov 1 00:41:39.611999 env[1441]: time="2025-11-01T00:41:39.611357021Z" level=info msg="StartContainer for \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\" returns successfully" Nov 1 00:41:39.628420 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:41:39.628717 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:41:39.628907 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:41:39.630956 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:41:39.640890 systemd[1]: cri-containerd-fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d.scope: Deactivated successfully. Nov 1 00:41:39.646520 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:41:39.716131 env[1441]: time="2025-11-01T00:41:39.715965739Z" level=info msg="shim disconnected" id=fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d Nov 1 00:41:39.716466 env[1441]: time="2025-11-01T00:41:39.716443744Z" level=warning msg="cleaning up after shim disconnected" id=fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d namespace=k8s.io Nov 1 00:41:39.716558 env[1441]: time="2025-11-01T00:41:39.716544846Z" level=info msg="cleaning up dead shim" Nov 1 00:41:39.735008 env[1441]: time="2025-11-01T00:41:39.734955260Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:41:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2937 runtime=io.containerd.runc.v2\n" Nov 1 00:41:40.474736 env[1441]: time="2025-11-01T00:41:40.474655069Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:40.481091 env[1441]: time="2025-11-01T00:41:40.481056243Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Nov 1 00:41:40.493592 env[1441]: time="2025-11-01T00:41:40.493557285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:40.494538 env[1441]: time="2025-11-01T00:41:40.494503196Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:41:40.495903 env[1441]: time="2025-11-01T00:41:40.495870212Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:41:40.509048 env[1441]: time="2025-11-01T00:41:40.509011962Z" level=info msg="CreateContainer within sandbox \"1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:41:40.538348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d-rootfs.mount: Deactivated successfully. Nov 1 00:41:40.558514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793190970.mount: Deactivated successfully. 
Nov 1 00:41:40.562857 env[1441]: time="2025-11-01T00:41:40.562809377Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\"" Nov 1 00:41:40.564701 env[1441]: time="2025-11-01T00:41:40.564663198Z" level=info msg="StartContainer for \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\"" Nov 1 00:41:40.568619 env[1441]: time="2025-11-01T00:41:40.568586143Z" level=info msg="CreateContainer within sandbox \"1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\"" Nov 1 00:41:40.569088 env[1441]: time="2025-11-01T00:41:40.569060848Z" level=info msg="StartContainer for \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\"" Nov 1 00:41:40.588509 systemd[1]: Started cri-containerd-e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109.scope. Nov 1 00:41:40.611168 systemd[1]: Started cri-containerd-19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6.scope. Nov 1 00:41:40.641250 systemd[1]: cri-containerd-e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109.scope: Deactivated successfully. 
Nov 1 00:41:40.644624 env[1441]: time="2025-11-01T00:41:40.644581111Z" level=info msg="StartContainer for \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\" returns successfully" Nov 1 00:41:40.669329 env[1441]: time="2025-11-01T00:41:40.669265993Z" level=info msg="StartContainer for \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\" returns successfully" Nov 1 00:41:41.201296 env[1441]: time="2025-11-01T00:41:41.201237929Z" level=info msg="shim disconnected" id=e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109 Nov 1 00:41:41.201635 env[1441]: time="2025-11-01T00:41:41.201328030Z" level=warning msg="cleaning up after shim disconnected" id=e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109 namespace=k8s.io Nov 1 00:41:41.201635 env[1441]: time="2025-11-01T00:41:41.201341930Z" level=info msg="cleaning up dead shim" Nov 1 00:41:41.216084 env[1441]: time="2025-11-01T00:41:41.216045495Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:41:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3039 runtime=io.containerd.runc.v2\n" Nov 1 00:41:41.488674 env[1441]: time="2025-11-01T00:41:41.488558151Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:41:41.523488 env[1441]: time="2025-11-01T00:41:41.523433243Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\"" Nov 1 00:41:41.524290 env[1441]: time="2025-11-01T00:41:41.524260052Z" level=info msg="StartContainer for \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\"" Nov 1 00:41:41.539613 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109-rootfs.mount: Deactivated successfully. Nov 1 00:41:41.570215 systemd[1]: run-containerd-runc-k8s.io-0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6-runc.uTVNMv.mount: Deactivated successfully. Nov 1 00:41:41.580186 systemd[1]: Started cri-containerd-0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6.scope. Nov 1 00:41:41.643554 env[1441]: time="2025-11-01T00:41:41.643510189Z" level=info msg="StartContainer for \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\" returns successfully" Nov 1 00:41:41.646359 systemd[1]: cri-containerd-0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6.scope: Deactivated successfully. Nov 1 00:41:41.677886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6-rootfs.mount: Deactivated successfully. Nov 1 00:41:41.696787 env[1441]: time="2025-11-01T00:41:41.696731086Z" level=info msg="shim disconnected" id=0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6 Nov 1 00:41:41.697050 env[1441]: time="2025-11-01T00:41:41.696793087Z" level=warning msg="cleaning up after shim disconnected" id=0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6 namespace=k8s.io Nov 1 00:41:41.697050 env[1441]: time="2025-11-01T00:41:41.696805887Z" level=info msg="cleaning up dead shim" Nov 1 00:41:41.715121 env[1441]: time="2025-11-01T00:41:41.715065592Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:41:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3093 runtime=io.containerd.runc.v2\n" Nov 1 00:41:41.744644 kubelet[2445]: I1101 00:41:41.744340 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6r6k5" podStartSLOduration=1.746724197 podStartE2EDuration="15.74432082s" 
podCreationTimestamp="2025-11-01 00:41:26 +0000 UTC" firstStartedPulling="2025-11-01 00:41:26.498906496 +0000 UTC m=+6.748418658" lastFinishedPulling="2025-11-01 00:41:40.496503119 +0000 UTC m=+20.746015281" observedRunningTime="2025-11-01 00:41:41.743348109 +0000 UTC m=+21.992860371" watchObservedRunningTime="2025-11-01 00:41:41.74432082 +0000 UTC m=+21.993832982" Nov 1 00:41:42.494009 env[1441]: time="2025-11-01T00:41:42.493702724Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:41:42.528050 env[1441]: time="2025-11-01T00:41:42.527995901Z" level=info msg="CreateContainer within sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\"" Nov 1 00:41:42.530954 env[1441]: time="2025-11-01T00:41:42.530914333Z" level=info msg="StartContainer for \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\"" Nov 1 00:41:42.559861 systemd[1]: Started cri-containerd-558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789.scope. Nov 1 00:41:42.606214 env[1441]: time="2025-11-01T00:41:42.606168862Z" level=info msg="StartContainer for \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\" returns successfully" Nov 1 00:41:42.643952 systemd[1]: run-containerd-runc-k8s.io-558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789-runc.royoDq.mount: Deactivated successfully. Nov 1 00:41:42.772517 kubelet[2445]: I1101 00:41:42.772487 2445 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:41:42.855959 systemd[1]: Created slice kubepods-burstable-podc3b38b22_b09e_4604_86aa_5793c3214ef0.slice. Nov 1 00:41:42.869135 systemd[1]: Created slice kubepods-burstable-podaa7d7bac_b172_4223_b44b_8291e9dfb62d.slice. 
Nov 1 00:41:42.928502 kubelet[2445]: I1101 00:41:42.928458 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7d7bac-b172-4223-b44b-8291e9dfb62d-config-volume\") pod \"coredns-674b8bbfcf-54chv\" (UID: \"aa7d7bac-b172-4223-b44b-8291e9dfb62d\") " pod="kube-system/coredns-674b8bbfcf-54chv" Nov 1 00:41:42.928502 kubelet[2445]: I1101 00:41:42.928500 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bvx\" (UniqueName: \"kubernetes.io/projected/aa7d7bac-b172-4223-b44b-8291e9dfb62d-kube-api-access-k6bvx\") pod \"coredns-674b8bbfcf-54chv\" (UID: \"aa7d7bac-b172-4223-b44b-8291e9dfb62d\") " pod="kube-system/coredns-674b8bbfcf-54chv" Nov 1 00:41:42.928761 kubelet[2445]: I1101 00:41:42.928532 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjlgj\" (UniqueName: \"kubernetes.io/projected/c3b38b22-b09e-4604-86aa-5793c3214ef0-kube-api-access-jjlgj\") pod \"coredns-674b8bbfcf-pxwbr\" (UID: \"c3b38b22-b09e-4604-86aa-5793c3214ef0\") " pod="kube-system/coredns-674b8bbfcf-pxwbr" Nov 1 00:41:42.928761 kubelet[2445]: I1101 00:41:42.928555 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3b38b22-b09e-4604-86aa-5793c3214ef0-config-volume\") pod \"coredns-674b8bbfcf-pxwbr\" (UID: \"c3b38b22-b09e-4604-86aa-5793c3214ef0\") " pod="kube-system/coredns-674b8bbfcf-pxwbr" Nov 1 00:41:43.167165 env[1441]: time="2025-11-01T00:41:43.167047705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxwbr,Uid:c3b38b22-b09e-4604-86aa-5793c3214ef0,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:43.176238 env[1441]: time="2025-11-01T00:41:43.176201804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-54chv,Uid:aa7d7bac-b172-4223-b44b-8291e9dfb62d,Namespace:kube-system,Attempt:0,}" Nov 1 00:41:44.709419 systemd-networkd[1584]: cilium_host: Link UP Nov 1 00:41:44.709545 systemd-networkd[1584]: cilium_net: Link UP Nov 1 00:41:44.719586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:41:44.719749 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:41:44.713820 systemd-networkd[1584]: cilium_net: Gained carrier Nov 1 00:41:44.720228 systemd-networkd[1584]: cilium_host: Gained carrier Nov 1 00:41:44.914735 systemd-networkd[1584]: cilium_vxlan: Link UP Nov 1 00:41:44.914747 systemd-networkd[1584]: cilium_vxlan: Gained carrier Nov 1 00:41:44.995130 systemd-networkd[1584]: cilium_net: Gained IPv6LL Nov 1 00:41:45.205084 kernel: NET: Registered PF_ALG protocol family Nov 1 00:41:45.563152 systemd-networkd[1584]: cilium_host: Gained IPv6LL Nov 1 00:41:46.026334 systemd-networkd[1584]: lxc_health: Link UP Nov 1 00:41:46.054051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:41:46.054673 systemd-networkd[1584]: lxc_health: Gained carrier Nov 1 00:41:46.191396 kubelet[2445]: I1101 00:41:46.191324 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fntg2" podStartSLOduration=12.236784047 podStartE2EDuration="21.191305332s" podCreationTimestamp="2025-11-01 00:41:25 +0000 UTC" firstStartedPulling="2025-11-01 00:41:26.282473506 +0000 UTC m=+6.531985668" lastFinishedPulling="2025-11-01 00:41:35.236994791 +0000 UTC m=+15.486506953" observedRunningTime="2025-11-01 00:41:43.513072147 +0000 UTC m=+23.762584409" watchObservedRunningTime="2025-11-01 00:41:46.191305332 +0000 UTC m=+26.440817594" Nov 1 00:41:46.238933 systemd-networkd[1584]: lxc5cf281151036: Link UP Nov 1 00:41:46.246056 kernel: eth0: renamed from tmp302e8 Nov 1 00:41:46.258012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5cf281151036: link becomes 
ready Nov 1 00:41:46.257542 systemd-networkd[1584]: lxc5cf281151036: Gained carrier Nov 1 00:41:46.268268 systemd-networkd[1584]: cilium_vxlan: Gained IPv6LL Nov 1 00:41:46.281338 systemd-networkd[1584]: lxcc1eb2d020c76: Link UP Nov 1 00:41:46.291037 kernel: eth0: renamed from tmp45145 Nov 1 00:41:46.299111 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc1eb2d020c76: link becomes ready Nov 1 00:41:46.298507 systemd-networkd[1584]: lxcc1eb2d020c76: Gained carrier Nov 1 00:41:47.383116 kubelet[2445]: I1101 00:41:47.383080 2445 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:41:47.419206 systemd-networkd[1584]: lxcc1eb2d020c76: Gained IPv6LL Nov 1 00:41:47.675214 systemd-networkd[1584]: lxc5cf281151036: Gained IPv6LL Nov 1 00:41:47.739118 systemd-networkd[1584]: lxc_health: Gained IPv6LL Nov 1 00:41:50.071831 env[1441]: time="2025-11-01T00:41:50.071761560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:50.072385 env[1441]: time="2025-11-01T00:41:50.072351365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:50.072522 env[1441]: time="2025-11-01T00:41:50.072497167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:50.072764 env[1441]: time="2025-11-01T00:41:50.072731169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45145cc52865ac20dd9f8b4427458459c7aad3e2af5e70749fe405bcff117bd6 pid=3647 runtime=io.containerd.runc.v2 Nov 1 00:41:50.076555 env[1441]: time="2025-11-01T00:41:50.076504005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:41:50.076714 env[1441]: time="2025-11-01T00:41:50.076687807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:41:50.076839 env[1441]: time="2025-11-01T00:41:50.076815208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:41:50.077124 env[1441]: time="2025-11-01T00:41:50.077080511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/302e8d670d0064e59ad5c148db80530a8c305310009b23268aef5c4076ce0fb9 pid=3650 runtime=io.containerd.runc.v2 Nov 1 00:41:50.118972 systemd[1]: Started cri-containerd-302e8d670d0064e59ad5c148db80530a8c305310009b23268aef5c4076ce0fb9.scope. Nov 1 00:41:50.120779 systemd[1]: Started cri-containerd-45145cc52865ac20dd9f8b4427458459c7aad3e2af5e70749fe405bcff117bd6.scope. 
Nov 1 00:41:50.194745 env[1441]: time="2025-11-01T00:41:50.194702339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxwbr,Uid:c3b38b22-b09e-4604-86aa-5793c3214ef0,Namespace:kube-system,Attempt:0,} returns sandbox id \"302e8d670d0064e59ad5c148db80530a8c305310009b23268aef5c4076ce0fb9\"" Nov 1 00:41:50.204009 env[1441]: time="2025-11-01T00:41:50.203963028Z" level=info msg="CreateContainer within sandbox \"302e8d670d0064e59ad5c148db80530a8c305310009b23268aef5c4076ce0fb9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:41:50.224325 env[1441]: time="2025-11-01T00:41:50.224286823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-54chv,Uid:aa7d7bac-b172-4223-b44b-8291e9dfb62d,Namespace:kube-system,Attempt:0,} returns sandbox id \"45145cc52865ac20dd9f8b4427458459c7aad3e2af5e70749fe405bcff117bd6\"" Nov 1 00:41:50.234587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124579261.mount: Deactivated successfully. 
Nov 1 00:41:50.236813 env[1441]: time="2025-11-01T00:41:50.236776242Z" level=info msg="CreateContainer within sandbox \"45145cc52865ac20dd9f8b4427458459c7aad3e2af5e70749fe405bcff117bd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:41:50.249501 env[1441]: time="2025-11-01T00:41:50.249466064Z" level=info msg="CreateContainer within sandbox \"302e8d670d0064e59ad5c148db80530a8c305310009b23268aef5c4076ce0fb9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1526999dd27caec69e7674f10a1d95e20e30244a04e7c556e22e1b3c34aef3e\"" Nov 1 00:41:50.251633 env[1441]: time="2025-11-01T00:41:50.250030769Z" level=info msg="StartContainer for \"f1526999dd27caec69e7674f10a1d95e20e30244a04e7c556e22e1b3c34aef3e\"" Nov 1 00:41:50.275365 env[1441]: time="2025-11-01T00:41:50.275322012Z" level=info msg="CreateContainer within sandbox \"45145cc52865ac20dd9f8b4427458459c7aad3e2af5e70749fe405bcff117bd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7f5e7e32e4ad43b8fb5d3c5070136439c74bc302b3aa4c0537f1684785a7b0f\"" Nov 1 00:41:50.276332 env[1441]: time="2025-11-01T00:41:50.276297021Z" level=info msg="StartContainer for \"e7f5e7e32e4ad43b8fb5d3c5070136439c74bc302b3aa4c0537f1684785a7b0f\"" Nov 1 00:41:50.284509 systemd[1]: Started cri-containerd-f1526999dd27caec69e7674f10a1d95e20e30244a04e7c556e22e1b3c34aef3e.scope. Nov 1 00:41:50.314342 systemd[1]: Started cri-containerd-e7f5e7e32e4ad43b8fb5d3c5070136439c74bc302b3aa4c0537f1684785a7b0f.scope. 
Nov 1 00:41:50.341520 env[1441]: time="2025-11-01T00:41:50.341412646Z" level=info msg="StartContainer for \"f1526999dd27caec69e7674f10a1d95e20e30244a04e7c556e22e1b3c34aef3e\" returns successfully" Nov 1 00:41:50.384160 env[1441]: time="2025-11-01T00:41:50.384109955Z" level=info msg="StartContainer for \"e7f5e7e32e4ad43b8fb5d3c5070136439c74bc302b3aa4c0537f1684785a7b0f\" returns successfully" Nov 1 00:41:50.555618 kubelet[2445]: I1101 00:41:50.555562 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pxwbr" podStartSLOduration=24.5555405 podStartE2EDuration="24.5555405s" podCreationTimestamp="2025-11-01 00:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:50.5242589 +0000 UTC m=+30.773771062" watchObservedRunningTime="2025-11-01 00:41:50.5555405 +0000 UTC m=+30.805052762" Nov 1 00:43:25.605703 systemd[1]: Started sshd@5-10.200.4.33:22-10.200.16.10:46100.service. Nov 1 00:43:26.204562 sshd[3816]: Accepted publickey for core from 10.200.16.10 port 46100 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:26.206252 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:26.211878 systemd-logind[1429]: New session 8 of user core. Nov 1 00:43:26.212515 systemd[1]: Started session-8.scope. Nov 1 00:43:26.784290 sshd[3816]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:26.787699 systemd[1]: sshd@5-10.200.4.33:22-10.200.16.10:46100.service: Deactivated successfully. Nov 1 00:43:26.788871 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:43:26.789551 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:43:26.790396 systemd-logind[1429]: Removed session 8. Nov 1 00:43:31.885278 systemd[1]: Started sshd@6-10.200.4.33:22-10.200.16.10:48472.service. 
Nov 1 00:43:32.482752 sshd[3831]: Accepted publickey for core from 10.200.16.10 port 48472 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:32.484568 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:32.490417 systemd[1]: Started session-9.scope. Nov 1 00:43:32.490998 systemd-logind[1429]: New session 9 of user core. Nov 1 00:43:32.963513 sshd[3831]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:32.966771 systemd[1]: sshd@6-10.200.4.33:22-10.200.16.10:48472.service: Deactivated successfully. Nov 1 00:43:32.968300 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:43:32.968332 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:43:32.969699 systemd-logind[1429]: Removed session 9. Nov 1 00:43:38.064004 systemd[1]: Started sshd@7-10.200.4.33:22-10.200.16.10:48478.service. Nov 1 00:43:38.660270 sshd[3843]: Accepted publickey for core from 10.200.16.10 port 48478 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:38.661843 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:38.666746 systemd[1]: Started session-10.scope. Nov 1 00:43:38.667496 systemd-logind[1429]: New session 10 of user core. Nov 1 00:43:39.142881 sshd[3843]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:39.146367 systemd[1]: sshd@7-10.200.4.33:22-10.200.16.10:48478.service: Deactivated successfully. Nov 1 00:43:39.147459 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:43:39.148395 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:43:39.149401 systemd-logind[1429]: Removed session 10. Nov 1 00:43:44.243034 systemd[1]: Started sshd@8-10.200.4.33:22-10.200.16.10:49300.service. 
Nov 1 00:43:44.832229 sshd[3856]: Accepted publickey for core from 10.200.16.10 port 49300 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:44.833701 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:44.838233 systemd-logind[1429]: New session 11 of user core. Nov 1 00:43:44.838848 systemd[1]: Started session-11.scope. Nov 1 00:43:45.320547 sshd[3856]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:45.323892 systemd[1]: sshd@8-10.200.4.33:22-10.200.16.10:49300.service: Deactivated successfully. Nov 1 00:43:45.325081 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:43:45.325915 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:43:45.326931 systemd-logind[1429]: Removed session 11. Nov 1 00:43:45.419923 systemd[1]: Started sshd@9-10.200.4.33:22-10.200.16.10:49308.service. Nov 1 00:43:46.015257 sshd[3869]: Accepted publickey for core from 10.200.16.10 port 49308 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:46.016908 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:46.022765 systemd[1]: Started session-12.scope. Nov 1 00:43:46.023636 systemd-logind[1429]: New session 12 of user core. Nov 1 00:43:46.534879 sshd[3869]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:46.538166 systemd[1]: sshd@9-10.200.4.33:22-10.200.16.10:49308.service: Deactivated successfully. Nov 1 00:43:46.539692 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:43:46.539727 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:43:46.540791 systemd-logind[1429]: Removed session 12. Nov 1 00:43:46.648938 systemd[1]: Started sshd@10-10.200.4.33:22-10.200.16.10:49314.service. 
Nov 1 00:43:47.244746 sshd[3878]: Accepted publickey for core from 10.200.16.10 port 49314 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:47.246448 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:47.251297 systemd-logind[1429]: New session 13 of user core. Nov 1 00:43:47.251770 systemd[1]: Started session-13.scope. Nov 1 00:43:47.744258 sshd[3878]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:47.747671 systemd[1]: sshd@10-10.200.4.33:22-10.200.16.10:49314.service: Deactivated successfully. Nov 1 00:43:47.748596 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:43:47.749308 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:43:47.750076 systemd-logind[1429]: Removed session 13. Nov 1 00:43:52.845780 systemd[1]: Started sshd@11-10.200.4.33:22-10.200.16.10:40538.service. Nov 1 00:43:53.436516 sshd[3890]: Accepted publickey for core from 10.200.16.10 port 40538 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:53.438240 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:53.445018 systemd[1]: Started session-14.scope. Nov 1 00:43:53.446807 systemd-logind[1429]: New session 14 of user core. Nov 1 00:43:53.911938 sshd[3890]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:53.915428 systemd[1]: sshd@11-10.200.4.33:22-10.200.16.10:40538.service: Deactivated successfully. Nov 1 00:43:53.916379 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:43:53.917155 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:43:53.917922 systemd-logind[1429]: Removed session 14. Nov 1 00:43:59.011841 systemd[1]: Started sshd@12-10.200.4.33:22-10.200.16.10:40542.service. 
Nov 1 00:43:59.604797 sshd[3904]: Accepted publickey for core from 10.200.16.10 port 40542 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:43:59.606332 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:59.611118 systemd[1]: Started session-15.scope. Nov 1 00:43:59.611721 systemd-logind[1429]: New session 15 of user core. Nov 1 00:44:00.088101 sshd[3904]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:00.091459 systemd[1]: sshd@12-10.200.4.33:22-10.200.16.10:40542.service: Deactivated successfully. Nov 1 00:44:00.092617 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:44:00.093565 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:44:00.094539 systemd-logind[1429]: Removed session 15. Nov 1 00:44:00.188814 systemd[1]: Started sshd@13-10.200.4.33:22-10.200.16.10:39488.service. Nov 1 00:44:00.790017 sshd[3915]: Accepted publickey for core from 10.200.16.10 port 39488 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:44:00.790825 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:00.796464 systemd[1]: Started session-16.scope. Nov 1 00:44:00.796961 systemd-logind[1429]: New session 16 of user core. Nov 1 00:44:01.288859 sshd[3915]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:01.291964 systemd[1]: sshd@13-10.200.4.33:22-10.200.16.10:39488.service: Deactivated successfully. Nov 1 00:44:01.292930 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:44:01.293572 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:44:01.294444 systemd-logind[1429]: Removed session 16. Nov 1 00:44:01.388175 systemd[1]: Started sshd@14-10.200.4.33:22-10.200.16.10:39494.service. 
Nov 1 00:44:01.984768 sshd[3924]: Accepted publickey for core from 10.200.16.10 port 39494 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:44:01.986539 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:01.992542 systemd[1]: Started session-17.scope. Nov 1 00:44:01.993323 systemd-logind[1429]: New session 17 of user core. Nov 1 00:44:02.914227 sshd[3924]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:02.917613 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:44:02.917891 systemd[1]: sshd@14-10.200.4.33:22-10.200.16.10:39494.service: Deactivated successfully. Nov 1 00:44:02.918916 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:44:02.920140 systemd-logind[1429]: Removed session 17. Nov 1 00:44:03.015557 systemd[1]: Started sshd@15-10.200.4.33:22-10.200.16.10:39502.service. Nov 1 00:44:03.608325 sshd[3941]: Accepted publickey for core from 10.200.16.10 port 39502 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:44:03.609965 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:03.614864 systemd[1]: Started session-18.scope. Nov 1 00:44:03.615325 systemd-logind[1429]: New session 18 of user core. Nov 1 00:44:04.192757 sshd[3941]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:04.196304 systemd[1]: sshd@15-10.200.4.33:22-10.200.16.10:39502.service: Deactivated successfully. Nov 1 00:44:04.197367 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:44:04.198588 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:44:04.199655 systemd-logind[1429]: Removed session 18. Nov 1 00:44:04.294908 systemd[1]: Started sshd@16-10.200.4.33:22-10.200.16.10:39504.service. 
Nov 1 00:44:04.896366 sshd[3950]: Accepted publickey for core from 10.200.16.10 port 39504 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:44:04.898141 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:44:04.904076 systemd-logind[1429]: New session 19 of user core.
Nov 1 00:44:04.904137 systemd[1]: Started session-19.scope.
Nov 1 00:44:05.374956 sshd[3950]: pam_unix(sshd:session): session closed for user core
Nov 1 00:44:05.378311 systemd[1]: sshd@16-10.200.4.33:22-10.200.16.10:39504.service: Deactivated successfully.
Nov 1 00:44:05.379298 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:44:05.380036 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:44:05.380895 systemd-logind[1429]: Removed session 19.
Nov 1 00:44:10.474397 systemd[1]: Started sshd@17-10.200.4.33:22-10.200.16.10:48904.service.
Nov 1 00:44:11.064732 sshd[3964]: Accepted publickey for core from 10.200.16.10 port 48904 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:44:11.066230 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:44:11.071251 systemd[1]: Started session-20.scope.
Nov 1 00:44:11.071698 systemd-logind[1429]: New session 20 of user core.
Nov 1 00:44:11.542927 sshd[3964]: pam_unix(sshd:session): session closed for user core
Nov 1 00:44:11.546691 systemd[1]: sshd@17-10.200.4.33:22-10.200.16.10:48904.service: Deactivated successfully.
Nov 1 00:44:11.547745 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:44:11.548320 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:44:11.549518 systemd-logind[1429]: Removed session 20.
Nov 1 00:44:16.643954 systemd[1]: Started sshd@18-10.200.4.33:22-10.200.16.10:48914.service.
Nov 1 00:44:17.241782 sshd[3976]: Accepted publickey for core from 10.200.16.10 port 48914 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:44:17.243531 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:44:17.248415 systemd[1]: Started session-21.scope.
Nov 1 00:44:17.248952 systemd-logind[1429]: New session 21 of user core.
Nov 1 00:44:17.820932 sshd[3976]: pam_unix(sshd:session): session closed for user core
Nov 1 00:44:17.824328 systemd[1]: sshd@18-10.200.4.33:22-10.200.16.10:48914.service: Deactivated successfully.
Nov 1 00:44:17.825528 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 00:44:17.826473 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit.
Nov 1 00:44:17.827459 systemd-logind[1429]: Removed session 21.
Nov 1 00:44:17.921276 systemd[1]: Started sshd@19-10.200.4.33:22-10.200.16.10:48922.service.
Nov 1 00:44:18.535020 sshd[3987]: Accepted publickey for core from 10.200.16.10 port 48922 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q
Nov 1 00:44:18.536675 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:44:18.542453 systemd-logind[1429]: New session 22 of user core.
Nov 1 00:44:18.543074 systemd[1]: Started session-22.scope.
Nov 1 00:44:20.257123 kubelet[2445]: I1101 00:44:20.257019 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-54chv" podStartSLOduration=174.256997433 podStartE2EDuration="2m54.256997433s" podCreationTimestamp="2025-11-01 00:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:41:50.575575792 +0000 UTC m=+30.825088054" watchObservedRunningTime="2025-11-01 00:44:20.256997433 +0000 UTC m=+180.506509595"
Nov 1 00:44:20.270844 env[1441]: time="2025-11-01T00:44:20.270797163Z" level=info msg="StopContainer for \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\" with timeout 30 (s)"
Nov 1 00:44:20.273014 env[1441]: time="2025-11-01T00:44:20.272934298Z" level=info msg="Stop container \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\" with signal terminated"
Nov 1 00:44:20.285223 systemd[1]: run-containerd-runc-k8s.io-558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789-runc.eOM7sV.mount: Deactivated successfully.
Nov 1 00:44:20.295750 systemd[1]: cri-containerd-19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6.scope: Deactivated successfully.
Nov 1 00:44:20.317776 env[1441]: time="2025-11-01T00:44:20.317708144Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:44:20.326452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6-rootfs.mount: Deactivated successfully.
Nov 1 00:44:20.330829 env[1441]: time="2025-11-01T00:44:20.330788662Z" level=info msg="StopContainer for \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\" with timeout 2 (s)"
Nov 1 00:44:20.331280 env[1441]: time="2025-11-01T00:44:20.331244069Z" level=info msg="Stop container \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\" with signal terminated"
Nov 1 00:44:20.341819 systemd-networkd[1584]: lxc_health: Link DOWN
Nov 1 00:44:20.341835 systemd-networkd[1584]: lxc_health: Lost carrier
Nov 1 00:44:20.360263 env[1441]: time="2025-11-01T00:44:20.358480323Z" level=info msg="shim disconnected" id=19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6
Nov 1 00:44:20.360263 env[1441]: time="2025-11-01T00:44:20.359005531Z" level=warning msg="cleaning up after shim disconnected" id=19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6 namespace=k8s.io
Nov 1 00:44:20.360263 env[1441]: time="2025-11-01T00:44:20.359021632Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:20.367362 systemd[1]: cri-containerd-558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789.scope: Deactivated successfully.
Nov 1 00:44:20.367671 systemd[1]: cri-containerd-558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789.scope: Consumed 7.378s CPU time.
Nov 1 00:44:20.375931 env[1441]: time="2025-11-01T00:44:20.375885212Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4041 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:20.381202 env[1441]: time="2025-11-01T00:44:20.381133500Z" level=info msg="StopContainer for \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\" returns successfully"
Nov 1 00:44:20.382005 env[1441]: time="2025-11-01T00:44:20.381964014Z" level=info msg="StopPodSandbox for \"1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f\""
Nov 1 00:44:20.382342 env[1441]: time="2025-11-01T00:44:20.382315520Z" level=info msg="Container to stop \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:44:20.387567 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f-shm.mount: Deactivated successfully.
Nov 1 00:44:20.397163 systemd[1]: cri-containerd-1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f.scope: Deactivated successfully.
Nov 1 00:44:20.410920 env[1441]: time="2025-11-01T00:44:20.410862995Z" level=info msg="shim disconnected" id=558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789
Nov 1 00:44:20.411278 env[1441]: time="2025-11-01T00:44:20.411254201Z" level=warning msg="cleaning up after shim disconnected" id=558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789 namespace=k8s.io
Nov 1 00:44:20.413915 env[1441]: time="2025-11-01T00:44:20.413784843Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:20.430057 env[1441]: time="2025-11-01T00:44:20.430017014Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4082 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:20.436932 env[1441]: time="2025-11-01T00:44:20.436888328Z" level=info msg="StopContainer for \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\" returns successfully"
Nov 1 00:44:20.437607 env[1441]: time="2025-11-01T00:44:20.437580140Z" level=info msg="StopPodSandbox for \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\""
Nov 1 00:44:20.437712 env[1441]: time="2025-11-01T00:44:20.437658641Z" level=info msg="Container to stop \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:44:20.437712 env[1441]: time="2025-11-01T00:44:20.437680141Z" level=info msg="Container to stop \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:44:20.437712 env[1441]: time="2025-11-01T00:44:20.437695342Z" level=info msg="Container to stop \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:44:20.437841 env[1441]: time="2025-11-01T00:44:20.437710342Z" level=info msg="Container to stop \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:44:20.437841 env[1441]: time="2025-11-01T00:44:20.437727942Z" level=info msg="Container to stop \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:44:20.447059 systemd[1]: cri-containerd-3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466.scope: Deactivated successfully.
Nov 1 00:44:20.453284 env[1441]: time="2025-11-01T00:44:20.452942895Z" level=info msg="shim disconnected" id=1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f
Nov 1 00:44:20.453284 env[1441]: time="2025-11-01T00:44:20.453247100Z" level=warning msg="cleaning up after shim disconnected" id=1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f namespace=k8s.io
Nov 1 00:44:20.453284 env[1441]: time="2025-11-01T00:44:20.453264701Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:20.469055 env[1441]: time="2025-11-01T00:44:20.469014463Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4110 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:20.469713 env[1441]: time="2025-11-01T00:44:20.469678174Z" level=info msg="TearDown network for sandbox \"1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f\" successfully"
Nov 1 00:44:20.469866 env[1441]: time="2025-11-01T00:44:20.469842777Z" level=info msg="StopPodSandbox for \"1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f\" returns successfully"
Nov 1 00:44:20.475074 kubelet[2445]: E1101 00:44:20.474999 2445 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 1 00:44:20.482445 env[1441]: time="2025-11-01T00:44:20.482399186Z" level=info msg="shim disconnected" id=3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466
Nov 1 00:44:20.482578 env[1441]: time="2025-11-01T00:44:20.482450487Z" level=warning msg="cleaning up after shim disconnected" id=3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466 namespace=k8s.io
Nov 1 00:44:20.482578 env[1441]: time="2025-11-01T00:44:20.482462887Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:20.492290 env[1441]: time="2025-11-01T00:44:20.492258750Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4137 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:20.492593 env[1441]: time="2025-11-01T00:44:20.492567355Z" level=info msg="TearDown network for sandbox \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" successfully"
Nov 1 00:44:20.492693 env[1441]: time="2025-11-01T00:44:20.492591556Z" level=info msg="StopPodSandbox for \"3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466\" returns successfully"
Nov 1 00:44:20.618813 kubelet[2445]: I1101 00:44:20.618692 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4phdb\" (UniqueName: \"kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-kube-api-access-4phdb\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.618813 kubelet[2445]: I1101 00:44:20.618797 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-cgroup\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619111 kubelet[2445]: I1101 00:44:20.618834 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-etc-cni-netd\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619111 kubelet[2445]: I1101 00:44:20.618858 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-config-path\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619111 kubelet[2445]: I1101 00:44:20.618876 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-net\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619111 kubelet[2445]: I1101 00:44:20.618897 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06a79739-2cc5-4e9c-be25-f79ee393a010-clustermesh-secrets\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619111 kubelet[2445]: I1101 00:44:20.618918 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wtx\" (UniqueName: \"kubernetes.io/projected/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-kube-api-access-z4wtx\") pod \"6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d\" (UID: \"6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d\") "
Nov 1 00:44:20.619111 kubelet[2445]: I1101 00:44:20.618943 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-run\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619351 kubelet[2445]: I1101 00:44:20.618963 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-xtables-lock\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619351 kubelet[2445]: I1101 00:44:20.618994 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-hubble-tls\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619351 kubelet[2445]: I1101 00:44:20.619017 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-cilium-config-path\") pod \"6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d\" (UID: \"6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d\") "
Nov 1 00:44:20.619351 kubelet[2445]: I1101 00:44:20.619037 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-kernel\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619351 kubelet[2445]: I1101 00:44:20.619055 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-bpf-maps\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619351 kubelet[2445]: I1101 00:44:20.619073 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-hostproc\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619624 kubelet[2445]: I1101 00:44:20.619093 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-lib-modules\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619624 kubelet[2445]: I1101 00:44:20.619111 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cni-path\") pod \"06a79739-2cc5-4e9c-be25-f79ee393a010\" (UID: \"06a79739-2cc5-4e9c-be25-f79ee393a010\") "
Nov 1 00:44:20.619624 kubelet[2445]: I1101 00:44:20.619197 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cni-path" (OuterVolumeSpecName: "cni-path") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.619821 kubelet[2445]: I1101 00:44:20.619795 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.619963 kubelet[2445]: I1101 00:44:20.619942 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.620115 kubelet[2445]: I1101 00:44:20.620085 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.621651 kubelet[2445]: I1101 00:44:20.621611 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.623253 kubelet[2445]: I1101 00:44:20.623228 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:44:20.623412 kubelet[2445]: I1101 00:44:20.623393 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.625132 kubelet[2445]: I1101 00:44:20.625101 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.625226 kubelet[2445]: I1101 00:44:20.625146 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.625226 kubelet[2445]: I1101 00:44:20.625169 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-hostproc" (OuterVolumeSpecName: "hostproc") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.625226 kubelet[2445]: I1101 00:44:20.625186 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:44:20.626554 kubelet[2445]: I1101 00:44:20.626526 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d" (UID: "6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:44:20.626646 kubelet[2445]: I1101 00:44:20.626615 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-kube-api-access-4phdb" (OuterVolumeSpecName: "kube-api-access-4phdb") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "kube-api-access-4phdb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:44:20.628276 kubelet[2445]: I1101 00:44:20.628242 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a79739-2cc5-4e9c-be25-f79ee393a010-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:44:20.629582 kubelet[2445]: I1101 00:44:20.629555 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "06a79739-2cc5-4e9c-be25-f79ee393a010" (UID: "06a79739-2cc5-4e9c-be25-f79ee393a010"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:44:20.630860 kubelet[2445]: I1101 00:44:20.630833 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-kube-api-access-z4wtx" (OuterVolumeSpecName: "kube-api-access-z4wtx") pod "6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d" (UID: "6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d"). InnerVolumeSpecName "kube-api-access-z4wtx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:44:20.720361 kubelet[2445]: I1101 00:44:20.720306 2445 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06a79739-2cc5-4e9c-be25-f79ee393a010-clustermesh-secrets\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720361 kubelet[2445]: I1101 00:44:20.720346 2445 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4wtx\" (UniqueName: \"kubernetes.io/projected/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-kube-api-access-z4wtx\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720361 kubelet[2445]: I1101 00:44:20.720369 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-run\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720384 2445 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-xtables-lock\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720398 2445 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-hubble-tls\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720412 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d-cilium-config-path\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720427 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720440 2445 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-bpf-maps\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720453 2445 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-hostproc\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720468 2445 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-lib-modules\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720672 kubelet[2445]: I1101 00:44:20.720484 2445 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cni-path\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720929 kubelet[2445]: I1101 00:44:20.720497 2445 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4phdb\" (UniqueName: \"kubernetes.io/projected/06a79739-2cc5-4e9c-be25-f79ee393a010-kube-api-access-4phdb\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720929 kubelet[2445]: I1101 00:44:20.720547 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-cgroup\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720929 kubelet[2445]: I1101 00:44:20.720566 2445 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-etc-cni-netd\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720929 kubelet[2445]: I1101 00:44:20.720581 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06a79739-2cc5-4e9c-be25-f79ee393a010-cilium-config-path\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.720929 kubelet[2445]: I1101 00:44:20.720596 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06a79739-2cc5-4e9c-be25-f79ee393a010-host-proc-sys-net\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\""
Nov 1 00:44:20.818076 kubelet[2445]: I1101 00:44:20.818035 2445 scope.go:117] "RemoveContainer" containerID="558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789"
Nov 1 00:44:20.821536 env[1441]: time="2025-11-01T00:44:20.821496732Z" level=info msg="RemoveContainer for \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\""
Nov 1 00:44:20.826699 systemd[1]: Removed slice kubepods-burstable-pod06a79739_2cc5_4e9c_be25_f79ee393a010.slice.
Nov 1 00:44:20.826832 systemd[1]: kubepods-burstable-pod06a79739_2cc5_4e9c_be25_f79ee393a010.slice: Consumed 7.491s CPU time.
Nov 1 00:44:20.831521 systemd[1]: Removed slice kubepods-besteffort-pod6c9ab0cf_be14_4a66_b9b5_b1ad73f38f4d.slice.
Nov 1 00:44:20.837474 env[1441]: time="2025-11-01T00:44:20.837434897Z" level=info msg="RemoveContainer for \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\" returns successfully"
Nov 1 00:44:20.837730 kubelet[2445]: I1101 00:44:20.837710 2445 scope.go:117] "RemoveContainer" containerID="0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6"
Nov 1 00:44:20.839242 env[1441]: time="2025-11-01T00:44:20.839213027Z" level=info msg="RemoveContainer for \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\""
Nov 1 00:44:20.851255 env[1441]: time="2025-11-01T00:44:20.851215327Z" level=info msg="RemoveContainer for \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\" returns successfully"
Nov 1 00:44:20.851809 kubelet[2445]: I1101 00:44:20.851783 2445 scope.go:117] "RemoveContainer" containerID="e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109"
Nov 1 00:44:20.860662 env[1441]: time="2025-11-01T00:44:20.860630183Z" level=info msg="RemoveContainer for \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\""
Nov 1 00:44:20.866760 env[1441]: time="2025-11-01T00:44:20.866725985Z" level=info msg="RemoveContainer for \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\" returns successfully"
Nov 1 00:44:20.870914 kubelet[2445]: I1101 00:44:20.869115 2445 scope.go:117] "RemoveContainer" containerID="fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d"
Nov 1 00:44:20.871244 env[1441]: time="2025-11-01T00:44:20.870187442Z" level=info msg="RemoveContainer for \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\""
Nov 1 00:44:20.875931 env[1441]: time="2025-11-01T00:44:20.875904538Z" level=info msg="RemoveContainer for \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\" returns successfully"
Nov 1 00:44:20.876091 kubelet[2445]: I1101 00:44:20.876066 2445 scope.go:117] "RemoveContainer" containerID="f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249"
Nov 1 00:44:20.877021 env[1441]: time="2025-11-01T00:44:20.876971255Z" level=info msg="RemoveContainer for \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\""
Nov 1 00:44:20.883053 env[1441]: time="2025-11-01T00:44:20.883027256Z" level=info msg="RemoveContainer for \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\" returns successfully"
Nov 1 00:44:20.883189 kubelet[2445]: I1101 00:44:20.883161 2445 scope.go:117] "RemoveContainer" containerID="558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789"
Nov 1 00:44:20.883411 env[1441]: time="2025-11-01T00:44:20.883348362Z" level=error msg="ContainerStatus for \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\": not found"
Nov 1 00:44:20.883533 kubelet[2445]: E1101 00:44:20.883509 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\": not found" containerID="558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789"
Nov 1 00:44:20.883609 kubelet[2445]: I1101 00:44:20.883542 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789"} err="failed to get container status \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\": rpc error: code = NotFound desc = an error occurred when try to find container \"558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789\": not found"
Nov 1 00:44:20.883609 kubelet[2445]: I1101 00:44:20.883597 2445 scope.go:117] "RemoveContainer" containerID="0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6"
Nov 1 00:44:20.883803 env[1441]: time="2025-11-01T00:44:20.883760068Z" level=error msg="ContainerStatus for \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\": not found"
Nov 1 00:44:20.883921 kubelet[2445]: E1101 00:44:20.883899 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\": not found" containerID="0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6"
Nov 1 00:44:20.884010 kubelet[2445]: I1101 00:44:20.883926 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6"} err="failed to get container status \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\": rpc error: code = NotFound desc = an error occurred when try to find container \"0568dd929320c2f3dfc3c997e666d2d7f8f6dde8dcfc846810511547e7ae7ba6\": not found"
Nov 1 00:44:20.884010 kubelet[2445]: I1101 00:44:20.883947 2445 scope.go:117] "RemoveContainer" containerID="e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109"
Nov 1 00:44:20.884184 env[1441]: time="2025-11-01T00:44:20.884137975Z" level=error msg="ContainerStatus for \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\": not found"
Nov 1 00:44:20.884303 kubelet[2445]: E1101 00:44:20.884277 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred 
when try to find container \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\": not found" containerID="e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109" Nov 1 00:44:20.884364 kubelet[2445]: I1101 00:44:20.884307 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109"} err="failed to get container status \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\": rpc error: code = NotFound desc = an error occurred when try to find container \"e84bb4815a3157b4628e65bb9b6a1d638af2bc7aa0803008930cb8c408515109\": not found" Nov 1 00:44:20.884364 kubelet[2445]: I1101 00:44:20.884327 2445 scope.go:117] "RemoveContainer" containerID="fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d" Nov 1 00:44:20.884547 env[1441]: time="2025-11-01T00:44:20.884489981Z" level=error msg="ContainerStatus for \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\": not found" Nov 1 00:44:20.884644 kubelet[2445]: E1101 00:44:20.884621 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\": not found" containerID="fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d" Nov 1 00:44:20.884725 kubelet[2445]: I1101 00:44:20.884646 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d"} err="failed to get container status \"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"fadd750e7206533585842cec21028ef873c8ac303bd76dd3a00d97585b5f183d\": not found" Nov 1 00:44:20.884725 kubelet[2445]: I1101 00:44:20.884666 2445 scope.go:117] "RemoveContainer" containerID="f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249" Nov 1 00:44:20.884904 env[1441]: time="2025-11-01T00:44:20.884854187Z" level=error msg="ContainerStatus for \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\": not found" Nov 1 00:44:20.885030 kubelet[2445]: E1101 00:44:20.885006 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\": not found" containerID="f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249" Nov 1 00:44:20.885117 kubelet[2445]: I1101 00:44:20.885034 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249"} err="failed to get container status \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\": rpc error: code = NotFound desc = an error occurred when try to find container \"f52222974cf76855113b10b963d4056c9949746cbdf54cba82e1ba39d27ab249\": not found" Nov 1 00:44:20.885117 kubelet[2445]: I1101 00:44:20.885052 2445 scope.go:117] "RemoveContainer" containerID="19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6" Nov 1 00:44:20.885926 env[1441]: time="2025-11-01T00:44:20.885901704Z" level=info msg="RemoveContainer for \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\"" Nov 1 00:44:20.891757 env[1441]: time="2025-11-01T00:44:20.891725801Z" level=info msg="RemoveContainer for \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\" 
returns successfully" Nov 1 00:44:20.891937 kubelet[2445]: I1101 00:44:20.891901 2445 scope.go:117] "RemoveContainer" containerID="19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6" Nov 1 00:44:20.892246 env[1441]: time="2025-11-01T00:44:20.892197009Z" level=error msg="ContainerStatus for \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\": not found" Nov 1 00:44:20.892404 kubelet[2445]: E1101 00:44:20.892384 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\": not found" containerID="19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6" Nov 1 00:44:20.892479 kubelet[2445]: I1101 00:44:20.892407 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6"} err="failed to get container status \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\": rpc error: code = NotFound desc = an error occurred when try to find container \"19dad436a9d981b60ce9dd84eac31db0821d278d0a2260cc1989fe5db9bb6ad6\": not found" Nov 1 00:44:21.276004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-558f5b25bbeadb7dd3d809e85ea928b36882fccc2f18574291d96587860d8789-rootfs.mount: Deactivated successfully. Nov 1 00:44:21.276132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1533d7d53df3f1a818a7029dbebd019f3fe5236946ac7c9af048491857e3d05f-rootfs.mount: Deactivated successfully. Nov 1 00:44:21.276213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466-rootfs.mount: Deactivated successfully. 
Nov 1 00:44:21.276284 systemd[1]: var-lib-kubelet-pods-6c9ab0cf\x2dbe14\x2d4a66\x2db9b5\x2db1ad73f38f4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz4wtx.mount: Deactivated successfully. Nov 1 00:44:21.276363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ad79025c1deb6ad2777280cdf22f1150068f9aca5d38f84326fdf7d9a39d466-shm.mount: Deactivated successfully. Nov 1 00:44:21.276440 systemd[1]: var-lib-kubelet-pods-06a79739\x2d2cc5\x2d4e9c\x2dbe25\x2df79ee393a010-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4phdb.mount: Deactivated successfully. Nov 1 00:44:21.276519 systemd[1]: var-lib-kubelet-pods-06a79739\x2d2cc5\x2d4e9c\x2dbe25\x2df79ee393a010-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:44:21.276595 systemd[1]: var-lib-kubelet-pods-06a79739\x2d2cc5\x2d4e9c\x2dbe25\x2df79ee393a010-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:44:22.314634 sshd[3987]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:22.318067 systemd[1]: sshd@19-10.200.4.33:22-10.200.16.10:48922.service: Deactivated successfully. Nov 1 00:44:22.318965 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:44:22.319658 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:44:22.320548 systemd-logind[1429]: Removed session 22. Nov 1 00:44:22.407590 kubelet[2445]: I1101 00:44:22.407512 2445 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a79739-2cc5-4e9c-be25-f79ee393a010" path="/var/lib/kubelet/pods/06a79739-2cc5-4e9c-be25-f79ee393a010/volumes" Nov 1 00:44:22.408467 kubelet[2445]: I1101 00:44:22.408437 2445 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d" path="/var/lib/kubelet/pods/6c9ab0cf-be14-4a66-b9b5-b1ad73f38f4d/volumes" Nov 1 00:44:22.414438 systemd[1]: Started sshd@20-10.200.4.33:22-10.200.16.10:46806.service. 
Nov 1 00:44:23.004073 sshd[4156]: Accepted publickey for core from 10.200.16.10 port 46806 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:44:23.005779 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:23.011171 systemd-logind[1429]: New session 23 of user core. Nov 1 00:44:23.011635 systemd[1]: Started session-23.scope. Nov 1 00:44:23.850374 systemd[1]: Created slice kubepods-burstable-pod4cecbeaa_1b52_4a19_83e5_731acb614446.slice. Nov 1 00:44:23.932272 sshd[4156]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:23.935648 systemd[1]: sshd@20-10.200.4.33:22-10.200.16.10:46806.service: Deactivated successfully. Nov 1 00:44:23.936541 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:44:23.937305 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:44:23.938181 systemd-logind[1429]: Removed session 23. Nov 1 00:44:23.939781 kubelet[2445]: I1101 00:44:23.939752 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-cgroup\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940110 kubelet[2445]: I1101 00:44:23.939817 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cni-path\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940110 kubelet[2445]: I1101 00:44:23.939854 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-clustermesh-secrets\") pod \"cilium-t8kgs\" (UID: 
\"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940110 kubelet[2445]: I1101 00:44:23.939884 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-config-path\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940110 kubelet[2445]: I1101 00:44:23.939911 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-ipsec-secrets\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940110 kubelet[2445]: I1101 00:44:23.939933 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-hostproc\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940110 kubelet[2445]: I1101 00:44:23.939953 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-lib-modules\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940360 kubelet[2445]: I1101 00:44:23.940082 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-kernel\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940360 kubelet[2445]: I1101 
00:44:23.940115 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-hubble-tls\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940360 kubelet[2445]: I1101 00:44:23.940145 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmdtc\" (UniqueName: \"kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-kube-api-access-qmdtc\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940360 kubelet[2445]: I1101 00:44:23.940171 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-bpf-maps\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940360 kubelet[2445]: I1101 00:44:23.940201 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-run\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940360 kubelet[2445]: I1101 00:44:23.940222 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-xtables-lock\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940511 kubelet[2445]: I1101 00:44:23.940242 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-net\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:23.940511 kubelet[2445]: I1101 00:44:23.940268 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-etc-cni-netd\") pod \"cilium-t8kgs\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " pod="kube-system/cilium-t8kgs" Nov 1 00:44:24.031300 systemd[1]: Started sshd@21-10.200.4.33:22-10.200.16.10:46822.service. Nov 1 00:44:24.157609 env[1441]: time="2025-11-01T00:44:24.157208041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8kgs,Uid:4cecbeaa-1b52-4a19-83e5-731acb614446,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:24.185500 env[1441]: time="2025-11-01T00:44:24.185432597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:24.185678 env[1441]: time="2025-11-01T00:44:24.185468398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:24.185678 env[1441]: time="2025-11-01T00:44:24.185483998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:24.185821 env[1441]: time="2025-11-01T00:44:24.185667301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd pid=4180 runtime=io.containerd.runc.v2 Nov 1 00:44:24.198162 systemd[1]: Started cri-containerd-14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd.scope. 
Nov 1 00:44:24.226858 env[1441]: time="2025-11-01T00:44:24.226794666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8kgs,Uid:4cecbeaa-1b52-4a19-83e5-731acb614446,Namespace:kube-system,Attempt:0,} returns sandbox id \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\"" Nov 1 00:44:24.239204 env[1441]: time="2025-11-01T00:44:24.239163565Z" level=info msg="CreateContainer within sandbox \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:44:24.261800 env[1441]: time="2025-11-01T00:44:24.261729230Z" level=info msg="CreateContainer within sandbox \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\"" Nov 1 00:44:24.262689 env[1441]: time="2025-11-01T00:44:24.262641045Z" level=info msg="StartContainer for \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\"" Nov 1 00:44:24.278625 systemd[1]: Started cri-containerd-9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd.scope. Nov 1 00:44:24.293284 systemd[1]: cri-containerd-9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd.scope: Deactivated successfully. 
Nov 1 00:44:24.359371 env[1441]: time="2025-11-01T00:44:24.359312706Z" level=info msg="shim disconnected" id=9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd Nov 1 00:44:24.359371 env[1441]: time="2025-11-01T00:44:24.359374007Z" level=warning msg="cleaning up after shim disconnected" id=9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd namespace=k8s.io Nov 1 00:44:24.359702 env[1441]: time="2025-11-01T00:44:24.359386008Z" level=info msg="cleaning up dead shim" Nov 1 00:44:24.367379 env[1441]: time="2025-11-01T00:44:24.367340536Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4240 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:44:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:44:24.367694 env[1441]: time="2025-11-01T00:44:24.367598140Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Nov 1 00:44:24.371111 env[1441]: time="2025-11-01T00:44:24.371063696Z" level=error msg="Failed to pipe stdout of container \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\"" error="reading from a closed fifo" Nov 1 00:44:24.371298 env[1441]: time="2025-11-01T00:44:24.371251199Z" level=error msg="Failed to pipe stderr of container \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\"" error="reading from a closed fifo" Nov 1 00:44:24.375820 env[1441]: time="2025-11-01T00:44:24.375773572Z" level=error msg="StartContainer for \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:44:24.376062 kubelet[2445]: E1101 00:44:24.376026 2445 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd" Nov 1 00:44:24.377383 kubelet[2445]: E1101 00:44:24.376480 2445 kuberuntime_manager.go:1358] "Unhandled Error" err=< Nov 1 00:44:24.377383 kubelet[2445]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Nov 1 00:44:24.377383 kubelet[2445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Nov 1 00:44:24.377383 kubelet[2445]: rm /hostbin/cilium-mount Nov 1 00:44:24.377547 kubelet[2445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmdtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-t8kgs_kube-system(4cecbeaa-1b52-4a19-83e5-731acb614446): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Nov 1 00:44:24.377547 kubelet[2445]: > logger="UnhandledError" Nov 1 00:44:24.377715 kubelet[2445]: E1101 00:44:24.377636 2445 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t8kgs" podUID="4cecbeaa-1b52-4a19-83e5-731acb614446" Nov 1 00:44:24.395989 kubelet[2445]: I1101 00:44:24.395894 2445 setters.go:618] "Node became not ready" node="ci-3510.3.8-n-bb3ab03ab7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:44:24Z","lastTransitionTime":"2025-11-01T00:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:44:24.622106 sshd[4166]: Accepted publickey for core from 10.200.16.10 port 46822 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:44:24.623675 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:24.628835 systemd-logind[1429]: New session 24 of user core. Nov 1 00:44:24.629378 systemd[1]: Started session-24.scope. 
Nov 1 00:44:24.838075 env[1441]: time="2025-11-01T00:44:24.838027640Z" level=info msg="CreateContainer within sandbox \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Nov 1 00:44:24.862717 env[1441]: time="2025-11-01T00:44:24.862669538Z" level=info msg="CreateContainer within sandbox \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\"" Nov 1 00:44:24.863437 env[1441]: time="2025-11-01T00:44:24.863401950Z" level=info msg="StartContainer for \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\"" Nov 1 00:44:24.879407 systemd[1]: Started cri-containerd-9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8.scope. Nov 1 00:44:24.891562 systemd[1]: cri-containerd-9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8.scope: Deactivated successfully. 
Nov 1 00:44:24.907147 env[1441]: time="2025-11-01T00:44:24.907093156Z" level=info msg="shim disconnected" id=9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8 Nov 1 00:44:24.907350 env[1441]: time="2025-11-01T00:44:24.907149957Z" level=warning msg="cleaning up after shim disconnected" id=9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8 namespace=k8s.io Nov 1 00:44:24.907350 env[1441]: time="2025-11-01T00:44:24.907163557Z" level=info msg="cleaning up dead shim" Nov 1 00:44:24.914663 env[1441]: time="2025-11-01T00:44:24.914624977Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4280 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:44:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:44:24.914909 env[1441]: time="2025-11-01T00:44:24.914856581Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Nov 1 00:44:24.915164 env[1441]: time="2025-11-01T00:44:24.915128886Z" level=error msg="Failed to pipe stderr of container \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\"" error="reading from a closed fifo" Nov 1 00:44:24.918221 env[1441]: time="2025-11-01T00:44:24.918172935Z" level=error msg="Failed to pipe stdout of container \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\"" error="reading from a closed fifo" Nov 1 00:44:24.922457 env[1441]: time="2025-11-01T00:44:24.922410103Z" level=error msg="StartContainer for \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:44:24.922660 kubelet[2445]: E1101 00:44:24.922614 2445 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8" Nov 1 00:44:24.923869 kubelet[2445]: E1101 00:44:24.923168 2445 kuberuntime_manager.go:1358] "Unhandled Error" err=< Nov 1 00:44:24.923869 kubelet[2445]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Nov 1 00:44:24.923869 kubelet[2445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Nov 1 00:44:24.923869 kubelet[2445]: rm /hostbin/cilium-mount Nov 1 00:44:24.923869 kubelet[2445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmdtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-t8kgs_kube-system(4cecbeaa-1b52-4a19-83e5-731acb614446): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Nov 1 00:44:24.923869 kubelet[2445]: > logger="UnhandledError" Nov 1 00:44:24.924364 kubelet[2445]: E1101 00:44:24.924331 2445 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t8kgs" podUID="4cecbeaa-1b52-4a19-83e5-731acb614446" Nov 1 00:44:25.126792 sshd[4166]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:25.129992 systemd[1]: sshd@21-10.200.4.33:22-10.200.16.10:46822.service: Deactivated successfully. Nov 1 00:44:25.131832 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:44:25.132869 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:44:25.133855 systemd-logind[1429]: Removed session 24. Nov 1 00:44:25.229086 systemd[1]: Started sshd@22-10.200.4.33:22-10.200.16.10:46830.service. Nov 1 00:44:25.475757 kubelet[2445]: E1101 00:44:25.475619 2445 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:44:25.823701 sshd[4301]: Accepted publickey for core from 10.200.16.10 port 46830 ssh2: RSA SHA256:0Lz+e65NmjcLEWSU8nZWVjcdNmuD7VGwfZr523Bu77Q Nov 1 00:44:25.825394 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:25.830851 systemd[1]: Started session-25.scope. Nov 1 00:44:25.831447 systemd-logind[1429]: New session 25 of user core. 
Nov 1 00:44:25.839307 kubelet[2445]: I1101 00:44:25.839276 2445 scope.go:117] "RemoveContainer" containerID="9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd" Nov 1 00:44:25.840538 env[1441]: time="2025-11-01T00:44:25.840498135Z" level=info msg="StopPodSandbox for \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\"" Nov 1 00:44:25.843479 env[1441]: time="2025-11-01T00:44:25.840568736Z" level=info msg="Container to stop \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:25.843479 env[1441]: time="2025-11-01T00:44:25.840588736Z" level=info msg="Container to stop \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:25.844134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd-shm.mount: Deactivated successfully. Nov 1 00:44:25.848723 env[1441]: time="2025-11-01T00:44:25.848684366Z" level=info msg="RemoveContainer for \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\"" Nov 1 00:44:25.856290 env[1441]: time="2025-11-01T00:44:25.856260388Z" level=info msg="RemoveContainer for \"9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd\" returns successfully" Nov 1 00:44:25.858340 systemd[1]: cri-containerd-14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd.scope: Deactivated successfully. Nov 1 00:44:25.884756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd-rootfs.mount: Deactivated successfully. 
Nov 1 00:44:25.901709 env[1441]: time="2025-11-01T00:44:25.901660216Z" level=info msg="shim disconnected" id=14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd Nov 1 00:44:25.901895 env[1441]: time="2025-11-01T00:44:25.901714917Z" level=warning msg="cleaning up after shim disconnected" id=14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd namespace=k8s.io Nov 1 00:44:25.901895 env[1441]: time="2025-11-01T00:44:25.901727317Z" level=info msg="cleaning up dead shim" Nov 1 00:44:25.910196 env[1441]: time="2025-11-01T00:44:25.910161452Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4323 runtime=io.containerd.runc.v2\n" Nov 1 00:44:25.910495 env[1441]: time="2025-11-01T00:44:25.910462057Z" level=info msg="TearDown network for sandbox \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\" successfully" Nov 1 00:44:25.910575 env[1441]: time="2025-11-01T00:44:25.910494157Z" level=info msg="StopPodSandbox for \"14c8c1daeab2f54eab095e7bfefbefd4d8e544b4c5043bdeda6be2482852aabd\" returns successfully" Nov 1 00:44:26.057564 kubelet[2445]: I1101 00:44:26.057475 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-cgroup\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057564 kubelet[2445]: I1101 00:44:26.057557 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-hostproc\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057592 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-hubble-tls\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057619 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmdtc\" (UniqueName: \"kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-kube-api-access-qmdtc\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057642 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-bpf-maps\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057668 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-net\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057699 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-config-path\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057723 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cni-path\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057745 2445 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-run\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057769 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-xtables-lock\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057802 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-clustermesh-secrets\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057833 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-lib-modules\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.057871 kubelet[2445]: I1101 00:44:26.057864 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-etc-cni-netd\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.058508 kubelet[2445]: I1101 00:44:26.057891 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-kernel\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: 
\"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.058508 kubelet[2445]: I1101 00:44:26.057926 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-ipsec-secrets\") pod \"4cecbeaa-1b52-4a19-83e5-731acb614446\" (UID: \"4cecbeaa-1b52-4a19-83e5-731acb614446\") " Nov 1 00:44:26.059414 kubelet[2445]: I1101 00:44:26.058771 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cni-path" (OuterVolumeSpecName: "cni-path") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.059414 kubelet[2445]: I1101 00:44:26.058826 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.059414 kubelet[2445]: I1101 00:44:26.058853 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.060123 kubelet[2445]: I1101 00:44:26.059737 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.060123 kubelet[2445]: I1101 00:44:26.059795 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-hostproc" (OuterVolumeSpecName: "hostproc") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.065282 systemd[1]: var-lib-kubelet-pods-4cecbeaa\x2d1b52\x2d4a19\x2d83e5\x2d731acb614446-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 00:44:26.066735 kubelet[2445]: I1101 00:44:26.066705 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.066827 kubelet[2445]: I1101 00:44:26.066749 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.066827 kubelet[2445]: I1101 00:44:26.066770 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.070892 systemd[1]: var-lib-kubelet-pods-4cecbeaa\x2d1b52\x2d4a19\x2d83e5\x2d731acb614446-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:44:26.072069 kubelet[2445]: I1101 00:44:26.072045 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:26.072200 kubelet[2445]: I1101 00:44:26.072183 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.072368 kubelet[2445]: I1101 00:44:26.072342 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:26.072488 kubelet[2445]: I1101 00:44:26.072359 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:26.072835 kubelet[2445]: I1101 00:44:26.072813 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:26.077072 systemd[1]: var-lib-kubelet-pods-4cecbeaa\x2d1b52\x2d4a19\x2d83e5\x2d731acb614446-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqmdtc.mount: Deactivated successfully. Nov 1 00:44:26.081690 kubelet[2445]: I1101 00:44:26.081667 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:26.081836 systemd[1]: var-lib-kubelet-pods-4cecbeaa\x2d1b52\x2d4a19\x2d83e5\x2d731acb614446-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 1 00:44:26.082048 kubelet[2445]: I1101 00:44:26.082017 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-kube-api-access-qmdtc" (OuterVolumeSpecName: "kube-api-access-qmdtc") pod "4cecbeaa-1b52-4a19-83e5-731acb614446" (UID: "4cecbeaa-1b52-4a19-83e5-731acb614446"). InnerVolumeSpecName "kube-api-access-qmdtc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:26.158346 kubelet[2445]: I1101 00:44:26.158303 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158346 kubelet[2445]: I1101 00:44:26.158334 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-cgroup\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158346 kubelet[2445]: I1101 00:44:26.158348 2445 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-hostproc\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158346 kubelet[2445]: I1101 00:44:26.158358 2445 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-hubble-tls\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158369 2445 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qmdtc\" (UniqueName: \"kubernetes.io/projected/4cecbeaa-1b52-4a19-83e5-731acb614446-kube-api-access-qmdtc\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158380 2445 reconciler_common.go:299] "Volume detached for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-bpf-maps\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158395 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-net\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158406 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-config-path\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158416 2445 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cni-path\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158431 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-cilium-run\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158442 2445 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-xtables-lock\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158452 2445 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cecbeaa-1b52-4a19-83e5-731acb614446-clustermesh-secrets\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158463 2445 reconciler_common.go:299] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-lib-modules\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158474 2445 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-etc-cni-netd\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.158651 kubelet[2445]: I1101 00:44:26.158485 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cecbeaa-1b52-4a19-83e5-731acb614446-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-bb3ab03ab7\" DevicePath \"\"" Nov 1 00:44:26.412279 systemd[1]: Removed slice kubepods-burstable-pod4cecbeaa_1b52_4a19_83e5_731acb614446.slice. Nov 1 00:44:26.842951 kubelet[2445]: I1101 00:44:26.842898 2445 scope.go:117] "RemoveContainer" containerID="9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8" Nov 1 00:44:26.845710 env[1441]: time="2025-11-01T00:44:26.844830942Z" level=info msg="RemoveContainer for \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\"" Nov 1 00:44:26.851338 env[1441]: time="2025-11-01T00:44:26.851294645Z" level=info msg="RemoveContainer for \"9c6acdf0b90223bf734458876b69ea6829347c6bc97379c3b689b5a79a5de5e8\" returns successfully" Nov 1 00:44:26.901002 systemd[1]: Created slice kubepods-burstable-pod5aaeeec4_ad86_48e3_862e_482baee6687e.slice. 
Nov 1 00:44:26.964856 kubelet[2445]: I1101 00:44:26.964819 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-bpf-maps\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.964856 kubelet[2445]: I1101 00:44:26.964858 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-xtables-lock\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.964887 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mgd\" (UniqueName: \"kubernetes.io/projected/5aaeeec4-ad86-48e3-862e-482baee6687e-kube-api-access-x9mgd\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.964911 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-cni-path\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.964931 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5aaeeec4-ad86-48e3-862e-482baee6687e-cilium-ipsec-secrets\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.964951 2445 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-host-proc-sys-net\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.964984 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5aaeeec4-ad86-48e3-862e-482baee6687e-hubble-tls\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.965005 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-cilium-run\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.965029 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-cilium-cgroup\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.965051 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-etc-cni-netd\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.965074 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5aaeeec4-ad86-48e3-862e-482baee6687e-cilium-config-path\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.965099 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-host-proc-sys-kernel\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965141 kubelet[2445]: I1101 00:44:26.965131 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-hostproc\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965528 kubelet[2445]: I1101 00:44:26.965155 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aaeeec4-ad86-48e3-862e-482baee6687e-lib-modules\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:26.965528 kubelet[2445]: I1101 00:44:26.965180 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5aaeeec4-ad86-48e3-862e-482baee6687e-clustermesh-secrets\") pod \"cilium-mw6d2\" (UID: \"5aaeeec4-ad86-48e3-862e-482baee6687e\") " pod="kube-system/cilium-mw6d2" Nov 1 00:44:27.208244 env[1441]: time="2025-11-01T00:44:27.207674594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw6d2,Uid:5aaeeec4-ad86-48e3-862e-482baee6687e,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:27.237966 env[1441]: time="2025-11-01T00:44:27.237902672Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:27.237966 env[1441]: time="2025-11-01T00:44:27.237939872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:27.238177 env[1441]: time="2025-11-01T00:44:27.238141975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:27.238402 env[1441]: time="2025-11-01T00:44:27.238361779Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714 pid=4359 runtime=io.containerd.runc.v2 Nov 1 00:44:27.250691 systemd[1]: Started cri-containerd-56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714.scope. Nov 1 00:44:27.280684 env[1441]: time="2025-11-01T00:44:27.279881135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw6d2,Uid:5aaeeec4-ad86-48e3-862e-482baee6687e,Namespace:kube-system,Attempt:0,} returns sandbox id \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\"" Nov 1 00:44:27.288872 env[1441]: time="2025-11-01T00:44:27.288842377Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:44:27.318057 env[1441]: time="2025-11-01T00:44:27.318015038Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d\"" Nov 1 00:44:27.319860 env[1441]: time="2025-11-01T00:44:27.318757849Z" level=info msg="StartContainer for \"864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d\"" Nov 1 
00:44:27.335450 systemd[1]: Started cri-containerd-864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d.scope.
Nov 1 00:44:27.368008 env[1441]: time="2025-11-01T00:44:27.367722823Z" level=info msg="StartContainer for \"864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d\" returns successfully"
Nov 1 00:44:27.371370 systemd[1]: cri-containerd-864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d.scope: Deactivated successfully.
Nov 1 00:44:27.412675 env[1441]: time="2025-11-01T00:44:27.412624433Z" level=info msg="shim disconnected" id=864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d
Nov 1 00:44:27.412675 env[1441]: time="2025-11-01T00:44:27.412666134Z" level=warning msg="cleaning up after shim disconnected" id=864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d namespace=k8s.io
Nov 1 00:44:27.412675 env[1441]: time="2025-11-01T00:44:27.412678234Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:27.420489 env[1441]: time="2025-11-01T00:44:27.420449257Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4441 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:27.464308 kubelet[2445]: W1101 00:44:27.464198 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cecbeaa_1b52_4a19_83e5_731acb614446.slice/cri-containerd-9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd.scope WatchSource:0}: container "9b44118efca2cef54a5d25219eaa598cb73a6bb3514e6c929411404c92d956bd" in namespace "k8s.io": not found
Nov 1 00:44:27.858092 env[1441]: time="2025-11-01T00:44:27.858042672Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 1 00:44:27.885550 env[1441]: time="2025-11-01T00:44:27.885508906Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95\""
Nov 1 00:44:27.886872 env[1441]: time="2025-11-01T00:44:27.886059615Z" level=info msg="StartContainer for \"90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95\""
Nov 1 00:44:27.902540 systemd[1]: Started cri-containerd-90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95.scope.
Nov 1 00:44:27.933001 env[1441]: time="2025-11-01T00:44:27.932948756Z" level=info msg="StartContainer for \"90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95\" returns successfully"
Nov 1 00:44:27.936434 systemd[1]: cri-containerd-90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95.scope: Deactivated successfully.
Nov 1 00:44:27.962806 env[1441]: time="2025-11-01T00:44:27.962752127Z" level=info msg="shim disconnected" id=90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95
Nov 1 00:44:27.962806 env[1441]: time="2025-11-01T00:44:27.962807928Z" level=warning msg="cleaning up after shim disconnected" id=90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95 namespace=k8s.io
Nov 1 00:44:27.963303 env[1441]: time="2025-11-01T00:44:27.962819128Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:27.970371 env[1441]: time="2025-11-01T00:44:27.970336547Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4503 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:28.407963 kubelet[2445]: I1101 00:44:28.407920 2445 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cecbeaa-1b52-4a19-83e5-731acb614446" path="/var/lib/kubelet/pods/4cecbeaa-1b52-4a19-83e5-731acb614446/volumes"
Nov 1 00:44:28.859937 env[1441]: time="2025-11-01T00:44:28.859889708Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 1 00:44:28.892300 env[1441]: time="2025-11-01T00:44:28.892194814Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5\""
Nov 1 00:44:28.893346 env[1441]: time="2025-11-01T00:44:28.893289032Z" level=info msg="StartContainer for \"a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5\""
Nov 1 00:44:28.920645 systemd[1]: Started cri-containerd-a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5.scope.
Nov 1 00:44:28.949861 systemd[1]: cri-containerd-a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5.scope: Deactivated successfully.
Nov 1 00:44:28.952904 env[1441]: time="2025-11-01T00:44:28.952518161Z" level=info msg="StartContainer for \"a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5\" returns successfully"
Nov 1 00:44:28.980087 env[1441]: time="2025-11-01T00:44:28.980047593Z" level=info msg="shim disconnected" id=a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5
Nov 1 00:44:28.980300 env[1441]: time="2025-11-01T00:44:28.980093594Z" level=warning msg="cleaning up after shim disconnected" id=a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5 namespace=k8s.io
Nov 1 00:44:28.980300 env[1441]: time="2025-11-01T00:44:28.980105694Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:28.988567 env[1441]: time="2025-11-01T00:44:28.988526426Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4560 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:29.074711 systemd[1]: run-containerd-runc-k8s.io-a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5-runc.sXb6mO.mount: Deactivated successfully.
Nov 1 00:44:29.074855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5-rootfs.mount: Deactivated successfully.
Nov 1 00:44:29.863559 env[1441]: time="2025-11-01T00:44:29.863503558Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 1 00:44:29.892299 env[1441]: time="2025-11-01T00:44:29.892247006Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e\""
Nov 1 00:44:29.893867 env[1441]: time="2025-11-01T00:44:29.892908516Z" level=info msg="StartContainer for \"5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e\""
Nov 1 00:44:29.920038 systemd[1]: Started cri-containerd-5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e.scope.
Nov 1 00:44:29.943799 systemd[1]: cri-containerd-5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e.scope: Deactivated successfully.
Nov 1 00:44:29.949545 env[1441]: time="2025-11-01T00:44:29.949510398Z" level=info msg="StartContainer for \"5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e\" returns successfully"
Nov 1 00:44:29.975788 env[1441]: time="2025-11-01T00:44:29.975745807Z" level=info msg="shim disconnected" id=5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e
Nov 1 00:44:29.975788 env[1441]: time="2025-11-01T00:44:29.975787108Z" level=warning msg="cleaning up after shim disconnected" id=5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e namespace=k8s.io
Nov 1 00:44:29.976094 env[1441]: time="2025-11-01T00:44:29.975798308Z" level=info msg="cleaning up dead shim"
Nov 1 00:44:29.982857 env[1441]: time="2025-11-01T00:44:29.982800217Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4616 runtime=io.containerd.runc.v2\n"
Nov 1 00:44:30.074204 systemd[1]: run-containerd-runc-k8s.io-5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e-runc.zLJRgc.mount: Deactivated successfully.
Nov 1 00:44:30.074390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e-rootfs.mount: Deactivated successfully.
Nov 1 00:44:30.476721 kubelet[2445]: E1101 00:44:30.476680 2445 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 1 00:44:30.575377 kubelet[2445]: W1101 00:44:30.575326 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaeeec4_ad86_48e3_862e_482baee6687e.slice/cri-containerd-864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d.scope WatchSource:0}: task 864ec60c778f6864b01b07588e0a8ac176cb3d9730f51ccccc31e0a5f332838d not found
Nov 1 00:44:30.871471 env[1441]: time="2025-11-01T00:44:30.871423265Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 1 00:44:30.905299 env[1441]: time="2025-11-01T00:44:30.905255488Z" level=info msg="CreateContainer within sandbox \"56e1c689be583d7ab400ffb457db5fe90b3d957eae3a57f238b30bb91aa37714\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3\""
Nov 1 00:44:30.905972 env[1441]: time="2025-11-01T00:44:30.905904198Z" level=info msg="StartContainer for \"e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3\""
Nov 1 00:44:30.931203 systemd[1]: Started cri-containerd-e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3.scope.
Nov 1 00:44:30.967901 env[1441]: time="2025-11-01T00:44:30.967850756Z" level=info msg="StartContainer for \"e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3\" returns successfully"
Nov 1 00:44:31.074297 systemd[1]: run-containerd-runc-k8s.io-e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3-runc.HJZfik.mount: Deactivated successfully.
Nov 1 00:44:31.357019 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 1 00:44:31.887617 kubelet[2445]: I1101 00:44:31.887556 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mw6d2" podStartSLOduration=5.887525387 podStartE2EDuration="5.887525387s" podCreationTimestamp="2025-11-01 00:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:31.887243082 +0000 UTC m=+192.136755344" watchObservedRunningTime="2025-11-01 00:44:31.887525387 +0000 UTC m=+192.137037649"
Nov 1 00:44:32.319260 systemd[1]: run-containerd-runc-k8s.io-e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3-runc.339Gmw.mount: Deactivated successfully.
Nov 1 00:44:33.682769 kubelet[2445]: W1101 00:44:33.682708 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaeeec4_ad86_48e3_862e_482baee6687e.slice/cri-containerd-90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95.scope WatchSource:0}: task 90229138344bbe63fda92263fb51d1db861173c35ee5f16187b6f33644f62f95 not found
Nov 1 00:44:34.154029 systemd-networkd[1584]: lxc_health: Link UP
Nov 1 00:44:34.163361 systemd-networkd[1584]: lxc_health: Gained carrier
Nov 1 00:44:34.164011 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 00:44:34.471672 systemd[1]: run-containerd-runc-k8s.io-e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3-runc.CJ4kLJ.mount: Deactivated successfully.
Nov 1 00:44:35.419167 systemd-networkd[1584]: lxc_health: Gained IPv6LL
Nov 1 00:44:36.653912 systemd[1]: run-containerd-runc-k8s.io-e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3-runc.hKd7JP.mount: Deactivated successfully.
Nov 1 00:44:36.796477 kubelet[2445]: W1101 00:44:36.796424 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaeeec4_ad86_48e3_862e_482baee6687e.slice/cri-containerd-a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5.scope WatchSource:0}: task a6381b4954c443ec2b051f53a15c4addd7cec51159d5e8ce4daf4474fb1777a5 not found Nov 1 00:44:38.794499 systemd[1]: run-containerd-runc-k8s.io-e87771cda26270dcb3fc5f45a9ac67e74e2e3f3b745e62afc772960e5e9a32d3-runc.ei4AGg.mount: Deactivated successfully. Nov 1 00:44:39.910787 kubelet[2445]: W1101 00:44:39.910740 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aaeeec4_ad86_48e3_862e_482baee6687e.slice/cri-containerd-5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e.scope WatchSource:0}: task 5ea6e5375708e22ecd6bcb9dcf856b0ca5caa6d1708dc265eee25ea8b3fadc2e not found Nov 1 00:44:41.150811 sshd[4301]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:41.154463 systemd[1]: sshd@22-10.200.4.33:22-10.200.16.10:46830.service: Deactivated successfully. Nov 1 00:44:41.155589 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:44:41.156461 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:44:41.157361 systemd-logind[1429]: Removed session 25. Nov 1 00:44:44.753321 kubelet[2445]: E1101 00:44:44.753281 2445 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: EOF"