Nov 1 00:58:17.054461 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:58:17.054485 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:58:17.054497 kernel: BIOS-provided physical RAM map:
Nov 1 00:58:17.054503 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:58:17.054509 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 1 00:58:17.054518 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 1 00:58:17.054528 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Nov 1 00:58:17.054536 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 1 00:58:17.054542 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 1 00:58:17.054549 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 1 00:58:17.054556 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 1 00:58:17.054564 kernel: printk: bootconsole [earlyser0] enabled
Nov 1 00:58:17.054571 kernel: NX (Execute Disable) protection: active
Nov 1 00:58:17.054577 kernel: efi: EFI v2.70 by Microsoft
Nov 1 00:58:17.054590 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Nov 1 00:58:17.054598 kernel: random: crng init done
Nov 1 00:58:17.054605 kernel: SMBIOS 3.1.0 present.
Nov 1 00:58:17.054612 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 1 00:58:17.054621 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 1 00:58:17.054629 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 1 00:58:17.054637 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
Nov 1 00:58:17.054643 kernel: Hyper-V: Nested features: 0x1e0101
Nov 1 00:58:17.054654 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 1 00:58:17.054661 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 1 00:58:17.054670 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 1 00:58:17.054676 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 1 00:58:17.054685 kernel: tsc: Detected 2593.905 MHz processor
Nov 1 00:58:17.054692 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:58:17.054702 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:58:17.054708 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 1 00:58:17.054716 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:58:17.054724 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 1 00:58:17.054736 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 1 00:58:17.054742 kernel: Using GB pages for direct mapping
Nov 1 00:58:17.054750 kernel: Secure boot disabled
Nov 1 00:58:17.054758 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:58:17.054768 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 1 00:58:17.054775 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054784 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054792 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 1 00:58:17.054804 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 1 00:58:17.054811 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054817 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054824 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054831 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054841 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054857 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054868 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:58:17.054876 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 1 00:58:17.054883 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 1 00:58:17.054897 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 1 00:58:17.054911 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 1 00:58:17.054924 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 1 00:58:17.054950 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 1 00:58:17.054960 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 1 00:58:17.054969 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 1 00:58:17.054985 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 1 00:58:17.054998 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 1 00:58:17.055008 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:58:17.055015 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:58:17.055027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 1 00:58:17.055040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 1 00:58:17.055054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 1 00:58:17.055067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 1 00:58:17.055074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 1 00:58:17.055084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 1 00:58:17.055100 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 1 00:58:17.055114 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 1 00:58:17.055123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 1 00:58:17.055130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 1 00:58:17.055143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 1 00:58:17.055157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 1 00:58:17.055174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 1 00:58:17.055186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 1 00:58:17.055192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 1 00:58:17.055203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 1 00:58:17.055217 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 1 00:58:17.055231 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 1 00:58:17.055239 kernel: Zone ranges:
Nov 1 00:58:17.055246 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:58:17.055259 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:58:17.055279 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 1 00:58:17.055294 kernel: Movable zone start for each node
Nov 1 00:58:17.055308 kernel: Early memory node ranges
Nov 1 00:58:17.055322 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:58:17.055331 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 1 00:58:17.055338 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 1 00:58:17.055349 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 1 00:58:17.055362 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 1 00:58:17.055375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:58:17.055392 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:58:17.055405 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 1 00:58:17.055418 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 1 00:58:17.055431 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 1 00:58:17.055440 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:58:17.055447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:58:17.055458 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:58:17.055471 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 1 00:58:17.055485 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:58:17.055502 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 1 00:58:17.055510 kernel: Booting paravirtualized kernel on Hyper-V
Nov 1 00:58:17.055518 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:58:17.055532 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:58:17.055545 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Nov 1 00:58:17.055557 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Nov 1 00:58:17.055564 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:58:17.055572 kernel: Hyper-V: PV spinlocks enabled
Nov 1 00:58:17.055586 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:58:17.055606 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 1 00:58:17.055618 kernel: Policy zone: Normal
Nov 1 00:58:17.055627 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:58:17.055635 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:58:17.055650 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 1 00:58:17.055664 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:58:17.055678 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:58:17.055688 kernel: Memory: 8071676K/8387460K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 315524K reserved, 0K cma-reserved)
Nov 1 00:58:17.055698 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:58:17.055713 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:58:17.055736 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:58:17.055746 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:58:17.055760 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:58:17.055772 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:58:17.055779 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:58:17.055791 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:58:17.055806 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:58:17.055816 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:58:17.055823 kernel: Using NULL legacy PIC
Nov 1 00:58:17.055842 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 1 00:58:17.055854 kernel: Console: colour dummy device 80x25
Nov 1 00:58:17.055863 kernel: printk: console [tty1] enabled
Nov 1 00:58:17.055875 kernel: printk: console [ttyS0] enabled
Nov 1 00:58:17.055889 kernel: printk: bootconsole [earlyser0] disabled
Nov 1 00:58:17.055903 kernel: ACPI: Core revision 20210730
Nov 1 00:58:17.055915 kernel: Failed to register legacy timer interrupt
Nov 1 00:58:17.055960 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:58:17.055974 kernel: Hyper-V: Using IPI hypercalls
Nov 1 00:58:17.055982 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Nov 1 00:58:17.055989 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 00:58:17.055996 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 00:58:17.056004 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:58:17.056016 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:58:17.056031 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:58:17.056049 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 1 00:58:17.056056 kernel: RETBleed: Vulnerable
Nov 1 00:58:17.056065 kernel: Speculative Store Bypass: Vulnerable
Nov 1 00:58:17.056080 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:58:17.056092 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:58:17.056099 kernel: active return thunk: its_return_thunk
Nov 1 00:58:17.056111 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:58:17.056126 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:58:17.056138 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:58:17.056145 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:58:17.056160 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 1 00:58:17.056174 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 1 00:58:17.056187 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 1 00:58:17.056195 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:58:17.056202 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 1 00:58:17.056209 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 1 00:58:17.056216 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 1 00:58:17.056223 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 1 00:58:17.056231 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:58:17.056241 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:58:17.056253 kernel: LSM: Security Framework initializing
Nov 1 00:58:17.056265 kernel: SELinux: Initializing.
Nov 1 00:58:17.056279 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:58:17.056890 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:58:17.056907 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 1 00:58:17.056917 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 1 00:58:17.056926 kernel: signal: max sigframe size: 3632
Nov 1 00:58:17.058397 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:58:17.058409 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:58:17.058421 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:58:17.058433 kernel: x86: Booting SMP configuration:
Nov 1 00:58:17.058445 kernel: .... node #0, CPUs: #1
Nov 1 00:58:17.058465 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 1 00:58:17.058479 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:58:17.058493 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:58:17.058507 kernel: smpboot: Max logical packages: 1
Nov 1 00:58:17.058520 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Nov 1 00:58:17.058533 kernel: devtmpfs: initialized
Nov 1 00:58:17.058547 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:58:17.058560 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 1 00:58:17.058577 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:58:17.058590 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:58:17.058603 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:58:17.058617 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:58:17.058630 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:58:17.058644 kernel: audit: type=2000 audit(1761958696.023:1): state=initialized audit_enabled=0 res=1
Nov 1 00:58:17.058657 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:58:17.058670 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:58:17.058684 kernel: cpuidle: using governor menu
Nov 1 00:58:17.058701 kernel: ACPI: bus type PCI registered
Nov 1 00:58:17.058715 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:58:17.058729 kernel: dca service started, version 1.12.1
Nov 1 00:58:17.058743 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:58:17.058757 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:58:17.058771 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:58:17.058784 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:58:17.058798 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:58:17.058812 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:58:17.058828 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:58:17.058841 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:58:17.058855 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:58:17.058868 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:58:17.058881 kernel: ACPI: Interpreter enabled
Nov 1 00:58:17.058895 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:58:17.058908 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:58:17.058922 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:58:17.058945 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 1 00:58:17.058959 kernel: iommu: Default domain type: Translated
Nov 1 00:58:17.058970 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:58:17.058981 kernel: vgaarb: loaded
Nov 1 00:58:17.058989 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:58:17.058997 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:58:17.059004 kernel: PTP clock support registered
Nov 1 00:58:17.059011 kernel: Registered efivars operations
Nov 1 00:58:17.059021 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:58:17.059031 kernel: PCI: System does not support PCI
Nov 1 00:58:17.059043 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 1 00:58:17.059051 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:58:17.059060 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:58:17.059069 kernel: pnp: PnP ACPI init
Nov 1 00:58:17.059077 kernel: pnp: PnP ACPI: found 3 devices
Nov 1 00:58:17.059087 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:58:17.059096 kernel: NET: Registered PF_INET protocol family
Nov 1 00:58:17.059105 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:58:17.059116 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 1 00:58:17.059129 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:58:17.059138 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:58:17.059148 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 1 00:58:17.059156 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 1 00:58:17.059167 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:58:17.059176 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:58:17.059187 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:58:17.059196 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:58:17.059208 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:58:17.059221 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:58:17.059231 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Nov 1 00:58:17.059242 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:58:17.059254 kernel: Initialise system trusted keyrings
Nov 1 00:58:17.059265 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 1 00:58:17.059275 kernel: Key type asymmetric registered
Nov 1 00:58:17.059282 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:58:17.059292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:58:17.059301 kernel: io scheduler mq-deadline registered
Nov 1 00:58:17.059313 kernel: io scheduler kyber registered
Nov 1 00:58:17.059324 kernel: io scheduler bfq registered
Nov 1 00:58:17.059331 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:58:17.059339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:58:17.059349 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:58:17.059360 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:58:17.059368 kernel: i8042: PNP: No PS/2 controller found.
Nov 1 00:58:17.059519 kernel: rtc_cmos 00:02: registered as rtc0
Nov 1 00:58:17.059617 kernel: rtc_cmos 00:02: setting system clock to 2025-11-01T00:58:16 UTC (1761958696)
Nov 1 00:58:17.059702 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 1 00:58:17.059713 kernel: intel_pstate: CPU model not supported
Nov 1 00:58:17.059723 kernel: efifb: probing for efifb
Nov 1 00:58:17.059730 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 1 00:58:17.059741 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 1 00:58:17.059750 kernel: efifb: scrolling: redraw
Nov 1 00:58:17.059759 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 1 00:58:17.059767 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:58:17.059779 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:58:17.059790 kernel: pstore: Registered efi as persistent store backend
Nov 1 00:58:17.059798 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:58:17.059808 kernel: Segment Routing with IPv6
Nov 1 00:58:17.059818 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:58:17.059826 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:58:17.059835 kernel: Key type dns_resolver registered
Nov 1 00:58:17.059844 kernel: IPI shorthand broadcast: enabled
Nov 1 00:58:17.059855 kernel: sched_clock: Marking stable (697848500, 19229100)->(889229300, -172151700)
Nov 1 00:58:17.059864 kernel: registered taskstats version 1
Nov 1 00:58:17.059875 kernel: Loading compiled-in X.509 certificates
Nov 1 00:58:17.059884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:58:17.059893 kernel: Key type .fscrypt registered
Nov 1 00:58:17.059901 kernel: Key type fscrypt-provisioning registered
Nov 1 00:58:17.059910 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:58:17.059921 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:58:17.071862 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:58:17.071887 kernel: ima: No architecture policies found
Nov 1 00:58:17.071896 kernel: clk: Disabling unused clocks
Nov 1 00:58:17.071907 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:58:17.071915 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:58:17.071924 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:58:17.071956 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:58:17.071966 kernel: Run /init as init process
Nov 1 00:58:17.071976 kernel: with arguments:
Nov 1 00:58:17.071984 kernel: /init
Nov 1 00:58:17.071994 kernel: with environment:
Nov 1 00:58:17.072006 kernel: HOME=/
Nov 1 00:58:17.072015 kernel: TERM=linux
Nov 1 00:58:17.072023 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:58:17.072037 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:58:17.072049 systemd[1]: Detected virtualization microsoft.
Nov 1 00:58:17.072058 systemd[1]: Detected architecture x86-64.
Nov 1 00:58:17.072067 systemd[1]: Running in initrd.
Nov 1 00:58:17.072080 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:58:17.072088 systemd[1]: Hostname set to .
Nov 1 00:58:17.072099 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:58:17.072110 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:58:17.072118 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:58:17.072129 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:58:17.072139 systemd[1]: Reached target paths.target.
Nov 1 00:58:17.072148 systemd[1]: Reached target slices.target.
Nov 1 00:58:17.072157 systemd[1]: Reached target swap.target.
Nov 1 00:58:17.072169 systemd[1]: Reached target timers.target.
Nov 1 00:58:17.072180 systemd[1]: Listening on iscsid.socket.
Nov 1 00:58:17.072188 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:58:17.072198 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:58:17.072210 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:58:17.072218 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:58:17.072229 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:58:17.072243 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:58:17.072251 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:58:17.072262 systemd[1]: Reached target sockets.target.
Nov 1 00:58:17.072272 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:58:17.072281 systemd[1]: Finished network-cleanup.service.
Nov 1 00:58:17.072290 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:58:17.072300 systemd[1]: Starting systemd-journald.service...
Nov 1 00:58:17.072311 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:58:17.072319 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:58:17.072332 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:58:17.072343 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:58:17.072351 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:58:17.072362 kernel: audit: type=1130 audit(1761958697.058:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.072372 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:58:17.072387 systemd-journald[183]: Journal started
Nov 1 00:58:17.072452 systemd-journald[183]: Runtime Journal (/run/log/journal/b301801534f1484b9e54c55f658e4552) is 8.0M, max 159.0M, 151.0M free.
Nov 1 00:58:17.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.046820 systemd-modules-load[184]: Inserted module 'overlay'
Nov 1 00:58:17.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.104150 systemd[1]: Started systemd-journald.service.
Nov 1 00:58:17.104217 kernel: audit: type=1130 audit(1761958697.083:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.106429 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:58:17.113286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:58:17.119397 systemd-resolved[185]: Positive Trust Anchors:
Nov 1 00:58:17.135277 kernel: audit: type=1130 audit(1761958697.104:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.135308 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:58:17.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.135490 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:58:17.135531 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:58:17.138559 systemd-resolved[185]: Defaulting to hostname 'linux'.
Nov 1 00:58:17.139489 systemd[1]: Started systemd-resolved.service.
Nov 1 00:58:17.148802 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:58:17.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.188612 kernel: audit: type=1130 audit(1761958697.147:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.188675 kernel: Bridge firewalling registered
Nov 1 00:58:17.186252 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:58:17.203881 kernel: audit: type=1130 audit(1761958697.190:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.190958 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 1 00:58:17.220485 kernel: audit: type=1130 audit(1761958697.203:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:17.201429 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:58:17.206025 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:58:17.233145 dracut-cmdline[200]: dracut-dracut-053
Nov 1 00:58:17.236959 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:58:17.259953 kernel: SCSI subsystem initialized
Nov 1 00:58:17.284422 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:58:17.284488 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:58:17.289798 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:58:17.294039 systemd-modules-load[184]: Inserted module 'dm_multipath'
Nov 1 00:58:17.295795 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:58:17.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:17.312018 kernel: audit: type=1130 audit(1761958697.300:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:17.312231 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:58:17.325472 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:58:17.342687 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:58:17.342719 kernel: audit: type=1130 audit(1761958697.329:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:17.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:17.361954 kernel: iscsi: registered transport (tcp) Nov 1 00:58:17.388637 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:58:17.388715 kernel: QLogic iSCSI HBA Driver Nov 1 00:58:17.418450 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:58:17.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:17.425054 systemd[1]: Starting dracut-pre-udev.service... Nov 1 00:58:17.438800 kernel: audit: type=1130 audit(1761958697.421:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:58:17.479962 kernel: raid6: avx512x4 gen() 18436 MB/s Nov 1 00:58:17.499948 kernel: raid6: avx512x4 xor() 8280 MB/s Nov 1 00:58:17.518942 kernel: raid6: avx512x2 gen() 18443 MB/s Nov 1 00:58:17.538948 kernel: raid6: avx512x2 xor() 29823 MB/s Nov 1 00:58:17.557945 kernel: raid6: avx512x1 gen() 18561 MB/s Nov 1 00:58:17.576943 kernel: raid6: avx512x1 xor() 26830 MB/s Nov 1 00:58:17.596946 kernel: raid6: avx2x4 gen() 18455 MB/s Nov 1 00:58:17.616943 kernel: raid6: avx2x4 xor() 7597 MB/s Nov 1 00:58:17.636942 kernel: raid6: avx2x2 gen() 18536 MB/s Nov 1 00:58:17.656946 kernel: raid6: avx2x2 xor() 22182 MB/s Nov 1 00:58:17.676942 kernel: raid6: avx2x1 gen() 14159 MB/s Nov 1 00:58:17.696942 kernel: raid6: avx2x1 xor() 19406 MB/s Nov 1 00:58:17.716945 kernel: raid6: sse2x4 gen() 11503 MB/s Nov 1 00:58:17.735944 kernel: raid6: sse2x4 xor() 7174 MB/s Nov 1 00:58:17.754941 kernel: raid6: sse2x2 gen() 12752 MB/s Nov 1 00:58:17.774944 kernel: raid6: sse2x2 xor() 7434 MB/s Nov 1 00:58:17.795942 kernel: raid6: sse2x1 gen() 11632 MB/s Nov 1 00:58:17.819256 kernel: raid6: sse2x1 xor() 5849 MB/s Nov 1 00:58:17.819287 kernel: raid6: using algorithm avx512x1 gen() 18561 MB/s Nov 1 00:58:17.819304 kernel: raid6: .... xor() 26830 MB/s, rmw enabled Nov 1 00:58:17.822222 kernel: raid6: using avx512x2 recovery algorithm Nov 1 00:58:17.840954 kernel: xor: automatically using best checksumming function avx Nov 1 00:58:17.936956 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:58:17.945664 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:58:17.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:17.948000 audit: BPF prog-id=7 op=LOAD Nov 1 00:58:17.948000 audit: BPF prog-id=8 op=LOAD Nov 1 00:58:17.950276 systemd[1]: Starting systemd-udevd.service... 
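Editorial note: the `raid6:` lines above show the kernel benchmarking every available gen()/xor() implementation and then picking the fastest one ("raid6: using algorithm avx512x1 gen() 18561 MB/s"). A minimal Python sketch of that selection step, using the gen() throughput figures from this log (the selection rule shown is a simplification of the kernel's actual logic):

```python
# Throughput in MB/s for each raid6 gen() implementation, as benchmarked above.
gen_results = {
    "avx512x4": 18436,
    "avx512x2": 18443,
    "avx512x1": 18561,
    "avx2x4": 18455,
    "avx2x2": 18536,
    "avx2x1": 14159,
    "sse2x4": 11503,
    "sse2x2": 12752,
    "sse2x1": 11632,
}

# Pick the implementation with the highest gen() throughput, matching the
# "raid6: using algorithm avx512x1" line in the log.
best = max(gen_results, key=gen_results.get)
print(best, gen_results[best])
```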
Nov 1 00:58:17.965040 systemd-udevd[383]: Using default interface naming scheme 'v252'. Nov 1 00:58:17.971548 systemd[1]: Started systemd-udevd.service. Nov 1 00:58:17.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:17.977552 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:58:17.992525 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Nov 1 00:58:18.025391 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:58:18.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:18.030338 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:58:18.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:18.066682 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:58:18.117956 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:58:18.143158 kernel: hv_vmbus: Vmbus version:5.2 Nov 1 00:58:18.151954 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 1 00:58:18.164168 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 1 00:58:18.164233 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:58:18.191972 kernel: AES CTR mode by8 optimization enabled Nov 1 00:58:18.192029 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:58:18.197061 kernel: hv_vmbus: registering driver hv_storvsc Nov 1 00:58:18.204948 kernel: hv_vmbus: registering driver hid_hyperv Nov 1 00:58:18.208961 kernel: scsi host1: storvsc_host_t Nov 1 00:58:18.209172 kernel: hv_vmbus: registering driver hv_netvsc Nov 1 00:58:18.214022 kernel: scsi host0: storvsc_host_t Nov 1 00:58:18.214225 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 1 00:58:18.222998 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 1 00:58:18.228071 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 1 00:58:18.233243 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 1 00:58:18.264946 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 1 00:58:18.282090 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:58:18.282285 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:58:18.282436 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 1 00:58:18.282581 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 1 00:58:18.282725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:58:18.282743 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:58:18.289066 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 1 00:58:18.297119 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:58:18.297141 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 1 00:58:18.408641 kernel: hv_netvsc 7c1e522e-8926-7c1e-522e-89267c1e522e eth0: VF slot 1 added Nov 1 00:58:18.417952 kernel: hv_vmbus: registering driver hv_pci Nov 1 00:58:18.423956 kernel: hv_pci 2b8cfd7d-2721-4bf8-8ffa-14a044a5f44e: PCI VMBus probing: Using version 0x10004 Nov 1 00:58:18.478104 kernel: hv_pci 
2b8cfd7d-2721-4bf8-8ffa-14a044a5f44e: PCI host bridge to bus 2721:00 Nov 1 00:58:18.478297 kernel: pci_bus 2721:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 1 00:58:18.478477 kernel: pci_bus 2721:00: No busn resource found for root bus, will use [bus 00-ff] Nov 1 00:58:18.478637 kernel: pci 2721:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 1 00:58:18.478817 kernel: pci 2721:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:58:18.478996 kernel: pci 2721:00:02.0: enabling Extended Tags Nov 1 00:58:18.479155 kernel: pci 2721:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2721:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 1 00:58:18.479319 kernel: pci_bus 2721:00: busn_res: [bus 00-ff] end is updated to 00 Nov 1 00:58:18.479470 kernel: pci 2721:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:58:18.572978 kernel: mlx5_core 2721:00:02.0: enabling device (0000 -> 0002) Nov 1 00:58:18.834048 kernel: mlx5_core 2721:00:02.0: firmware version: 14.30.5006 Nov 1 00:58:18.834237 kernel: mlx5_core 2721:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 00:58:18.834396 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (442) Nov 1 00:58:18.834414 kernel: mlx5_core 2721:00:02.0: Supported tc offload range - chains: 1, prios: 1 Nov 1 00:58:18.834569 kernel: mlx5_core 2721:00:02.0: mlx5e_tc_post_act_init:40:(pid 188): firmware level support is missing Nov 1 00:58:18.834732 kernel: hv_netvsc 7c1e522e-8926-7c1e-522e-89267c1e522e eth0: VF registering: eth1 Nov 1 00:58:18.834883 kernel: mlx5_core 2721:00:02.0 eth1: joined to eth0 Nov 1 00:58:18.715247 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:58:18.759511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
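Editorial note: the storvsc probe earlier reports the virtual disk as 63737856 512-byte logical blocks, which the kernel also prints as "32.6 GB/30.4 GiB". A quick sketch of that dual conversion (decimal GB vs. binary GiB), assuming nothing beyond the numbers in the log:

```python
BLOCKS = 63737856      # 512-byte logical blocks, from the sd 0:0:0:0 probe
BLOCK_SIZE = 512

size_bytes = BLOCKS * BLOCK_SIZE
size_gb = size_bytes / 10**9     # decimal gigabytes, as in "32.6 GB"
size_gib = size_bytes / 2**30    # binary gibibytes, as in "30.4 GiB"

print(f"{size_gb:.1f} GB / {size_gib:.1f} GiB")
```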
Nov 1 00:58:18.845951 kernel: mlx5_core 2721:00:02.0 enP10017s1: renamed from eth1 Nov 1 00:58:18.868119 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:58:18.881106 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:58:18.883677 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:58:18.888885 systemd[1]: Starting disk-uuid.service... Nov 1 00:58:18.906951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:58:18.917959 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:58:19.930950 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:58:19.931014 disk-uuid[565]: The operation has completed successfully. Nov 1 00:58:20.011758 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:58:20.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.011868 systemd[1]: Finished disk-uuid.service. Nov 1 00:58:20.029399 systemd[1]: Starting verity-setup.service... Nov 1 00:58:20.063952 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:58:20.301706 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:58:20.305255 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:58:20.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.310511 systemd[1]: Finished verity-setup.service. Nov 1 00:58:20.387213 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Nov 1 00:58:20.387657 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:58:20.390867 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:58:20.394777 systemd[1]: Starting ignition-setup.service... Nov 1 00:58:20.399133 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:58:20.427807 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:58:20.427888 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:58:20.427908 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:58:20.468380 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:58:20.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.471000 audit: BPF prog-id=9 op=LOAD Nov 1 00:58:20.473559 systemd[1]: Starting systemd-networkd.service... Nov 1 00:58:20.500060 systemd-networkd[829]: lo: Link UP Nov 1 00:58:20.501527 systemd-networkd[829]: lo: Gained carrier Nov 1 00:58:20.502087 systemd-networkd[829]: Enumeration completed Nov 1 00:58:20.506910 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:58:20.506970 systemd[1]: Started systemd-networkd.service. Nov 1 00:58:20.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.513550 systemd[1]: Reached target network.target. Nov 1 00:58:20.517826 systemd[1]: Starting iscsiuio.service... Nov 1 00:58:20.524431 systemd[1]: Started iscsiuio.service. Nov 1 00:58:20.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:58:20.526857 systemd[1]: Starting iscsid.service... Nov 1 00:58:20.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.536118 iscsid[837]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:58:20.536118 iscsid[837]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:58:20.536118 iscsid[837]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 00:58:20.536118 iscsid[837]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:58:20.536118 iscsid[837]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:58:20.536118 iscsid[837]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:58:20.536118 iscsid[837]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:58:20.534180 systemd[1]: Started iscsid.service. Nov 1 00:58:20.537111 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:58:20.584220 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:58:20.586798 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:58:20.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.590898 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:58:20.601841 kernel: mlx5_core 2721:00:02.0 enP10017s1: Link up Nov 1 00:58:20.602162 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 00:58:20.592885 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:58:20.593851 systemd[1]: Reached target remote-fs.target. Nov 1 00:58:20.609029 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:58:20.617632 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:58:20.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.644757 kernel: hv_netvsc 7c1e522e-8926-7c1e-522e-89267c1e522e eth0: Data path switched to VF: enP10017s1 Nov 1 00:58:20.645053 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:58:20.645406 systemd-networkd[829]: enP10017s1: Link UP Nov 1 00:58:20.647060 systemd-networkd[829]: eth0: Link UP Nov 1 00:58:20.648142 systemd-networkd[829]: eth0: Gained carrier Nov 1 00:58:20.653114 systemd-networkd[829]: enP10017s1: Gained carrier Nov 1 00:58:20.674013 systemd-networkd[829]: eth0: DHCPv4 address 10.200.4.7/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 1 00:58:20.698385 systemd[1]: Finished ignition-setup.service. Nov 1 00:58:20.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:20.703432 systemd[1]: Starting ignition-fetch-offline.service... 
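Editorial note: systemd-networkd above acquires 10.200.4.7/24 with gateway 10.200.4.1 from Azure's wireserver endpoint 168.63.129.16. A small sanity check with Python's `ipaddress` module that the gateway sits on-link inside the leased subnet (addresses taken from the log; the check itself is purely illustrative):

```python
import ipaddress

iface = ipaddress.ip_interface("10.200.4.7/24")  # leased address, from the log
gateway = ipaddress.ip_address("10.200.4.1")     # gateway, from the log

# The gateway must be reachable on-link, i.e. inside the /24 the lease defines.
print(iface.network, gateway in iface.network)
```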
Nov 1 00:58:22.001096 systemd-networkd[829]: eth0: Gained IPv6LL Nov 1 00:58:23.654289 ignition[856]: Ignition 2.14.0 Nov 1 00:58:23.654308 ignition[856]: Stage: fetch-offline Nov 1 00:58:23.654400 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:58:23.654460 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:58:23.734520 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:58:23.734721 ignition[856]: parsed url from cmdline: "" Nov 1 00:58:23.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.736094 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:58:23.760960 kernel: kauditd_printk_skb: 17 callbacks suppressed Nov 1 00:58:23.760995 kernel: audit: type=1130 audit(1761958703.738:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.734726 ignition[856]: no config URL provided Nov 1 00:58:23.741069 systemd[1]: Starting ignition-fetch.service... 
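Editorial note: each Ignition stage above identifies the config it parses by a SHA512 digest ("parsing config with SHA512: 4824fd4a..."). A minimal sketch of producing such a digest with Python's `hashlib`; the sample payload here is made up, and only the hashing step mirrors what the log shows:

```python
import hashlib

# Hypothetical config payload; Ignition hashes the raw bytes of base.ign.
config_bytes = b'{"ignition": {"version": "2.14.0"}}'

digest = hashlib.sha512(config_bytes).hexdigest()
print(digest)
# A SHA512 hex digest is always 128 characters, like the ones in the log.
```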
Nov 1 00:58:23.734733 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:58:23.734743 ignition[856]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:58:23.734749 ignition[856]: failed to fetch config: resource requires networking Nov 1 00:58:23.735013 ignition[856]: Ignition finished successfully Nov 1 00:58:23.749447 ignition[862]: Ignition 2.14.0 Nov 1 00:58:23.749453 ignition[862]: Stage: fetch Nov 1 00:58:23.749559 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:58:23.749582 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:58:23.752810 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:58:23.754348 ignition[862]: parsed url from cmdline: "" Nov 1 00:58:23.754352 ignition[862]: no config URL provided Nov 1 00:58:23.754359 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:58:23.754381 ignition[862]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:58:23.755175 ignition[862]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 1 00:58:23.879329 ignition[862]: GET result: OK Nov 1 00:58:23.879509 ignition[862]: config has been read from IMDS userdata Nov 1 00:58:23.879556 ignition[862]: parsing config with SHA512: fd2437b9feac1c0ec0badcde63f8d2f7ddda40c147c92c2a52b1e65f0c66a0f3043bdec585fa066733789042f6a1993346f96853424e41a474b27e680961a0ba Nov 1 00:58:23.886862 unknown[862]: fetched base config from "system" Nov 1 00:58:23.886875 unknown[862]: fetched base config from "system" Nov 1 00:58:23.886883 unknown[862]: fetched user config from "azure" Nov 1 00:58:23.892785 ignition[862]: fetch: fetch complete Nov 1 00:58:23.892796 ignition[862]: fetch: fetch passed Nov 1 00:58:23.892861 ignition[862]: Ignition finished successfully Nov 1 
00:58:23.898277 systemd[1]: Finished ignition-fetch.service. Nov 1 00:58:23.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.913962 kernel: audit: type=1130 audit(1761958703.899:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.913142 systemd[1]: Starting ignition-kargs.service... Nov 1 00:58:23.923696 ignition[868]: Ignition 2.14.0 Nov 1 00:58:23.923706 ignition[868]: Stage: kargs Nov 1 00:58:23.923846 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:58:23.923879 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:58:23.930875 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:58:23.952513 kernel: audit: type=1130 audit(1761958703.935:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.932085 ignition[868]: kargs: kargs passed Nov 1 00:58:23.933600 systemd[1]: Finished ignition-kargs.service. Nov 1 00:58:23.932134 ignition[868]: Ignition finished successfully Nov 1 00:58:23.936565 systemd[1]: Starting ignition-disks.service... 
Nov 1 00:58:23.951389 ignition[874]: Ignition 2.14.0 Nov 1 00:58:23.951396 ignition[874]: Stage: disks Nov 1 00:58:23.951517 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:58:23.951545 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:58:23.975732 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:58:23.977001 ignition[874]: disks: disks passed Nov 1 00:58:23.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.977908 systemd[1]: Finished ignition-disks.service. Nov 1 00:58:23.996432 kernel: audit: type=1130 audit(1761958703.981:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:23.977045 ignition[874]: Ignition finished successfully Nov 1 00:58:23.992053 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:58:23.996448 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:58:23.999766 systemd[1]: Reached target local-fs.target. Nov 1 00:58:24.006537 systemd[1]: Reached target sysinit.target. Nov 1 00:58:24.009717 systemd[1]: Reached target basic.target. Nov 1 00:58:24.014620 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:58:24.094018 systemd-fsck[882]: ROOT: clean, 637/7326000 files, 481088/7359488 blocks Nov 1 00:58:24.104036 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:58:24.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:24.109178 systemd[1]: Mounting sysroot.mount... 
Nov 1 00:58:24.122229 kernel: audit: type=1130 audit(1761958704.107:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:24.137954 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:58:24.138383 systemd[1]: Mounted sysroot.mount. Nov 1 00:58:24.141538 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:58:24.179522 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:58:24.185037 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 00:58:24.189367 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:58:24.189410 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:58:24.192370 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:58:24.238799 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:58:24.244211 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:58:24.258955 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (893) Nov 1 00:58:24.264426 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:58:24.273665 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:58:24.273696 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:58:24.273706 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:58:24.282786 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:58:24.310243 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:58:24.329529 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:58:24.421294 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:58:24.854890 systemd[1]: Finished initrd-setup-root.service. 
Nov 1 00:58:24.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:24.875999 kernel: audit: type=1130 audit(1761958704.856:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:24.870545 systemd[1]: Starting ignition-mount.service... Nov 1 00:58:24.873420 systemd[1]: Starting sysroot-boot.service... Nov 1 00:58:24.883527 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 00:58:24.883673 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 00:58:24.916755 systemd[1]: Finished sysroot-boot.service. Nov 1 00:58:24.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:24.932966 kernel: audit: type=1130 audit(1761958704.919:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:58:25.128413 ignition[962]: INFO : Ignition 2.14.0 Nov 1 00:58:25.128413 ignition[962]: INFO : Stage: mount Nov 1 00:58:25.132294 ignition[962]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:58:25.135126 ignition[962]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:58:25.145741 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:58:25.149795 ignition[962]: INFO : mount: mount passed Nov 1 00:58:25.151583 ignition[962]: INFO : Ignition finished successfully Nov 1 00:58:25.151803 systemd[1]: Finished ignition-mount.service. Nov 1 00:58:25.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:25.166953 kernel: audit: type=1130 audit(1761958705.155:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:25.786510 coreos-metadata[892]: Nov 01 00:58:25.786 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 1 00:58:25.804994 coreos-metadata[892]: Nov 01 00:58:25.804 INFO Fetch successful Nov 1 00:58:25.839741 coreos-metadata[892]: Nov 01 00:58:25.839 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 1 00:58:25.854121 coreos-metadata[892]: Nov 01 00:58:25.854 INFO Fetch successful Nov 1 00:58:25.872457 coreos-metadata[892]: Nov 01 00:58:25.872 INFO wrote hostname ci-3510.3.8-n-e458e05b0a to /sysroot/etc/hostname Nov 1 00:58:25.874693 systemd[1]: Finished flatcar-metadata-hostname.service. 
Nov 1 00:58:25.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:25.889883 systemd[1]: Starting ignition-files.service... Nov 1 00:58:25.893288 kernel: audit: type=1130 audit(1761958705.877:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:25.901447 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:58:25.917955 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (971) Nov 1 00:58:25.926336 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:58:25.926392 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:58:25.926407 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:58:26.105440 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Nov 1 00:58:26.120370 ignition[990]: INFO : Ignition 2.14.0 Nov 1 00:58:26.120370 ignition[990]: INFO : Stage: files Nov 1 00:58:26.123927 ignition[990]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:58:26.123927 ignition[990]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:58:26.136493 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:58:26.214700 ignition[990]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:58:26.220262 ignition[990]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:58:26.220262 ignition[990]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:58:26.274611 ignition[990]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:58:26.278135 ignition[990]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:58:26.291682 unknown[990]: wrote ssh authorized keys file for user: core Nov 1 00:58:26.294145 ignition[990]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:58:26.312031 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:58:26.317002 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:58:26.361706 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:58:26.419250 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:58:26.478794 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:58:26.482996 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 1 00:58:26.709276 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:58:26.756644 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:58:26.761114 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:58:26.761114 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:58:26.769194 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:58:26.769194 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:58:26.776722 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:58:26.780622 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:58:26.780622 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:58:26.788107 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:58:26.792028 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:58:26.796156 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Nov 1 00:58:26.799899 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:58:26.805379 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:58:26.812080 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Nov 1 00:58:26.816157 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:58:26.821271 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3806625893" Nov 1 00:58:26.821271 ignition[990]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3806625893": device or resource busy Nov 1 00:58:26.821271 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3806625893", trying btrfs: device or resource busy Nov 1 00:58:26.821271 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3806625893" Nov 1 00:58:26.821271 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3806625893" Nov 1 00:58:26.847834 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3806625893" Nov 1 00:58:26.847834 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting 
"/mnt/oem3806625893" Nov 1 00:58:26.847834 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Nov 1 00:58:26.847834 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:58:26.847834 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:58:26.827178 systemd[1]: mnt-oem3806625893.mount: Deactivated successfully. Nov 1 00:58:26.872702 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2003901312" Nov 1 00:58:26.872702 ignition[990]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2003901312": device or resource busy Nov 1 00:58:26.872702 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2003901312", trying btrfs: device or resource busy Nov 1 00:58:26.872702 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2003901312" Nov 1 00:58:26.872702 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2003901312" Nov 1 00:58:26.872702 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2003901312" Nov 1 00:58:26.902569 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2003901312" Nov 1 00:58:26.902569 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:58:26.902569 
ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:58:26.902569 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:58:26.921504 systemd[1]: mnt-oem2003901312.mount: Deactivated successfully. Nov 1 00:58:27.082680 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Nov 1 00:58:27.260739 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:58:27.260739 ignition[990]: INFO : files: op(14): [started] processing unit "waagent.service" Nov 1 00:58:27.260739 ignition[990]: INFO : files: op(14): [finished] processing unit "waagent.service" Nov 1 00:58:27.260739 ignition[990]: INFO : files: op(15): [started] processing unit "nvidia.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(15): [finished] processing unit "nvidia.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(19): 
[started] setting preset to enabled for "waagent.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Nov 1 00:58:27.267786 ignition[990]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:58:27.267786 ignition[990]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:58:27.267786 ignition[990]: INFO : files: files passed Nov 1 00:58:27.267786 ignition[990]: INFO : Ignition finished successfully Nov 1 00:58:27.266104 systemd[1]: Finished ignition-files.service. Nov 1 00:58:27.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.325565 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:58:27.338033 kernel: audit: type=1130 audit(1761958707.321:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.338030 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:58:27.342265 systemd[1]: Starting ignition-quench.service... Nov 1 00:58:27.346693 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:58:27.346802 systemd[1]: Finished ignition-quench.service. 
Nov 1 00:58:27.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.369404 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:58:27.373316 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:58:27.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.377661 systemd[1]: Reached target ignition-complete.target. Nov 1 00:58:27.382501 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:58:27.396971 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:58:27.397085 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:58:27.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.403141 systemd[1]: Reached target initrd-fs.target. Nov 1 00:58:27.406567 systemd[1]: Reached target initrd.target. Nov 1 00:58:27.409960 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:58:27.413451 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:58:27.424453 systemd[1]: Finished dracut-pre-pivot.service. 
Nov 1 00:58:27.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.429090 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:58:27.439305 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:58:27.441439 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:58:27.444971 systemd[1]: Stopped target timers.target. Nov 1 00:58:27.448615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:58:27.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.448784 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:58:27.452161 systemd[1]: Stopped target initrd.target. Nov 1 00:58:27.455628 systemd[1]: Stopped target basic.target. Nov 1 00:58:27.459218 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:58:27.462977 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:58:27.466316 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:58:27.470050 systemd[1]: Stopped target remote-fs.target. Nov 1 00:58:27.474483 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:58:27.478117 systemd[1]: Stopped target sysinit.target. Nov 1 00:58:27.481750 systemd[1]: Stopped target local-fs.target. Nov 1 00:58:27.485156 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:58:27.488561 systemd[1]: Stopped target swap.target. Nov 1 00:58:27.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.491608 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:58:27.491762 systemd[1]: Stopped dracut-pre-mount.service. 
Nov 1 00:58:27.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.495232 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:58:27.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.498318 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:58:27.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.498493 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:58:27.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.502581 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:58:27.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.533135 iscsid[837]: iscsid shutting down. Nov 1 00:58:27.502719 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Nov 1 00:58:27.537162 ignition[1028]: INFO : Ignition 2.14.0 Nov 1 00:58:27.537162 ignition[1028]: INFO : Stage: umount Nov 1 00:58:27.537162 ignition[1028]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:58:27.537162 ignition[1028]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:58:27.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.506426 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:58:27.556902 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:58:27.556902 ignition[1028]: INFO : umount: umount passed Nov 1 00:58:27.556902 ignition[1028]: INFO : Ignition finished successfully Nov 1 00:58:27.506559 systemd[1]: Stopped ignition-files.service. Nov 1 00:58:27.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.509696 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:58:27.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.509833 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 00:58:27.514673 systemd[1]: Stopping ignition-mount.service... Nov 1 00:58:27.517223 systemd[1]: Stopping iscsid.service... 
Nov 1 00:58:27.519854 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:58:27.521549 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:58:27.521720 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:58:27.524131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:58:27.524286 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:58:27.528331 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:58:27.528454 systemd[1]: Stopped iscsid.service. Nov 1 00:58:27.547513 systemd[1]: Stopping iscsiuio.service... Nov 1 00:58:27.554994 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:58:27.560970 systemd[1]: Stopped iscsiuio.service. Nov 1 00:58:27.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.567596 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:58:27.568223 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:58:27.568335 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:58:27.585215 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:58:27.591914 systemd[1]: Stopped ignition-mount.service. Nov 1 00:58:27.607875 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:58:27.609866 systemd[1]: Stopped ignition-disks.service. Nov 1 00:58:27.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.613481 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:58:27.613547 systemd[1]: Stopped ignition-kargs.service. 
Nov 1 00:58:27.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.617517 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:58:27.617581 systemd[1]: Stopped ignition-fetch.service. Nov 1 00:58:27.619614 systemd[1]: Stopped target network.target. Nov 1 00:58:27.621454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:58:27.621516 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:58:27.625159 systemd[1]: Stopped target paths.target. Nov 1 00:58:27.628815 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:58:27.631872 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:58:27.642668 systemd[1]: Stopped target slices.target. Nov 1 00:58:27.645891 systemd[1]: Stopped target sockets.target. Nov 1 00:58:27.649046 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:58:27.649105 systemd[1]: Closed iscsid.socket. Nov 1 00:58:27.654044 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:58:27.654101 systemd[1]: Closed iscsiuio.socket. Nov 1 00:58:27.658703 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:58:27.660634 systemd[1]: Stopped ignition-setup.service. Nov 1 00:58:27.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:58:27.664281 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:58:27.667529 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:58:27.671984 systemd-networkd[829]: eth0: DHCPv6 lease lost Nov 1 00:58:27.672910 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:58:27.674527 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:58:27.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.680509 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:58:27.682000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:58:27.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.680634 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:58:27.689000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:58:27.688221 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:58:27.688259 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:58:27.694817 systemd[1]: Stopping network-cleanup.service... Nov 1 00:58:27.698451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:58:27.698520 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:58:27.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.704800 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:58:27.704851 systemd[1]: Stopped systemd-sysctl.service. 
Nov 1 00:58:27.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.711871 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:58:27.714041 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:58:27.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.718036 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:58:27.722740 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:58:27.726324 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:58:27.728290 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:58:27.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.732817 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:58:27.732893 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:58:27.737080 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:58:27.737128 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:58:27.744258 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:58:27.744323 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:58:27.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.749649 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:58:27.749704 systemd[1]: Stopped dracut-cmdline.service. 
Nov 1 00:58:27.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.755044 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:58:27.755101 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:58:27.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.761580 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:58:27.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.763685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:58:27.763760 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Nov 1 00:58:27.766012 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:58:27.787479 kernel: hv_netvsc 7c1e522e-8926-7c1e-522e-89267c1e522e eth0: Data path switched from VF: enP10017s1 Nov 1 00:58:27.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:58:27.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:27.766066 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:58:27.768155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:58:27.768205 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:58:27.778049 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 1 00:58:27.778594 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:58:27.778680 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:58:27.805096 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:58:27.806980 systemd[1]: Stopped network-cleanup.service. Nov 1 00:58:27.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:28.283366 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:58:28.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:28.283484 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:58:28.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:28.286074 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:58:28.290659 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:58:28.290751 systemd[1]: Stopped initrd-setup-root.service. 
Nov 1 00:58:28.293618 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:58:28.310915 systemd[1]: Switching root.
Nov 1 00:58:28.334811 systemd-journald[183]: Journal stopped
Nov 1 00:58:43.651407 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:58:43.651440 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:58:43.651452 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:58:43.651460 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:58:43.651468 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:58:43.651477 kernel: SELinux: policy capability open_perms=1
Nov 1 00:58:43.651487 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:58:43.651496 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:58:43.651504 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:58:43.651512 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:58:43.651520 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:58:43.651528 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:58:43.651536 kernel: kauditd_printk_skb: 43 callbacks suppressed
Nov 1 00:58:43.651545 kernel: audit: type=1403 audit(1761958711.072:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:58:43.651561 systemd[1]: Successfully loaded SELinux policy in 296.904ms.
Nov 1 00:58:43.651571 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.562ms.
Nov 1 00:58:43.651582 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:58:43.651591 systemd[1]: Detected virtualization microsoft.
Nov 1 00:58:43.651602 systemd[1]: Detected architecture x86-64.
Nov 1 00:58:43.651614 systemd[1]: Detected first boot.
Nov 1 00:58:43.651625 systemd[1]: Hostname set to .
Nov 1 00:58:43.651634 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:58:43.651643 kernel: audit: type=1400 audit(1761958711.873:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:58:43.651653 kernel: audit: type=1400 audit(1761958711.887:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:58:43.651662 kernel: audit: type=1400 audit(1761958711.887:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:58:43.651674 kernel: audit: type=1334 audit(1761958711.910:85): prog-id=10 op=LOAD
Nov 1 00:58:43.651683 kernel: audit: type=1334 audit(1761958711.910:86): prog-id=10 op=UNLOAD
Nov 1 00:58:43.651694 kernel: audit: type=1334 audit(1761958711.915:87): prog-id=11 op=LOAD
Nov 1 00:58:43.651703 kernel: audit: type=1334 audit(1761958711.915:88): prog-id=11 op=UNLOAD
Nov 1 00:58:43.651712 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:58:43.651721 kernel: audit: type=1400 audit(1761958713.379:89): avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Nov 1 00:58:43.651731 kernel: audit: type=1300 audit(1761958713.379:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058c2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:58:43.651744 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:58:43.651754 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:58:43.651766 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:58:43.651776 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:58:43.651788 kernel: kauditd_printk_skb: 7 callbacks suppressed
Nov 1 00:58:43.651797 kernel: audit: type=1334 audit(1761958723.177:91): prog-id=12 op=LOAD
Nov 1 00:58:43.651808 kernel: audit: type=1334 audit(1761958723.177:92): prog-id=3 op=UNLOAD
Nov 1 00:58:43.651819 kernel: audit: type=1334 audit(1761958723.182:93): prog-id=13 op=LOAD
Nov 1 00:58:43.651833 kernel: audit: type=1334 audit(1761958723.186:94): prog-id=14 op=LOAD
Nov 1 00:58:43.651847 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:58:43.651858 kernel: audit: type=1334 audit(1761958723.186:95): prog-id=4 op=UNLOAD
Nov 1 00:58:43.651869 kernel: audit: type=1334 audit(1761958723.186:96): prog-id=5 op=UNLOAD
Nov 1 00:58:43.651881 kernel: audit: type=1131 audit(1761958723.187:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.651893 systemd[1]: Stopped initrd-switch-root.service.
Nov 1 00:58:43.651905 kernel: audit: type=1334 audit(1761958723.231:98): prog-id=12 op=UNLOAD
Nov 1 00:58:43.651919 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:58:43.651949 kernel: audit: type=1130 audit(1761958723.237:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.651963 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:58:43.651972 kernel: audit: type=1131 audit(1761958723.237:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.651983 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:58:43.651994 systemd[1]: Created slice system-getty.slice.
Nov 1 00:58:43.652005 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:58:43.652019 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:58:43.652032 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:58:43.652042 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:58:43.652054 systemd[1]: Created slice user.slice.
Nov 1 00:58:43.652064 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:58:43.652076 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:58:43.652087 systemd[1]: Set up automount boot.automount.
Nov 1 00:58:43.652098 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:58:43.652112 systemd[1]: Stopped target initrd-switch-root.target.
Nov 1 00:58:43.652126 systemd[1]: Stopped target initrd-fs.target.
Nov 1 00:58:43.652137 systemd[1]: Stopped target initrd-root-fs.target.
Nov 1 00:58:43.652149 systemd[1]: Reached target integritysetup.target.
Nov 1 00:58:43.652159 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:58:43.652171 systemd[1]: Reached target remote-fs.target.
Nov 1 00:58:43.652181 systemd[1]: Reached target slices.target.
Nov 1 00:58:43.652193 systemd[1]: Reached target swap.target.
Nov 1 00:58:43.652204 systemd[1]: Reached target torcx.target.
Nov 1 00:58:43.652218 systemd[1]: Reached target veritysetup.target.
Nov 1 00:58:43.652231 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:58:43.652243 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:58:43.652256 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:58:43.652269 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:58:43.652283 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:58:43.652299 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:58:43.652311 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:58:43.652322 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:58:43.652335 systemd[1]: Mounting media.mount...
Nov 1 00:58:43.652345 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:43.652358 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:58:43.652372 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:58:43.652382 systemd[1]: Mounting tmp.mount...
Nov 1 00:58:43.652397 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:58:43.652408 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:58:43.652420 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:58:43.652432 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:58:43.652443 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:58:43.652456 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:58:43.652466 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:58:43.652478 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:58:43.652489 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:58:43.652502 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:58:43.652512 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:58:43.652522 systemd[1]: Stopped systemd-fsck-root.service.
Nov 1 00:58:43.652532 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:58:43.652541 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:58:43.652551 systemd[1]: Stopped systemd-journald.service.
Nov 1 00:58:43.652561 systemd[1]: Starting systemd-journald.service...
Nov 1 00:58:43.652570 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:58:43.652582 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:58:43.652592 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:58:43.652602 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:58:43.652612 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 00:58:43.652623 systemd[1]: Stopped verity-setup.service.
Nov 1 00:58:43.652635 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:43.652645 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:58:43.652656 kernel: loop: module loaded
Nov 1 00:58:43.652666 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:58:43.652680 systemd[1]: Mounted media.mount.
Nov 1 00:58:43.652702 systemd-journald[1151]: Journal started
Nov 1 00:58:43.652752 systemd-journald[1151]: Runtime Journal (/run/log/journal/f2853ee0bcdc4fc19861b2141b9fe2bd) is 8.0M, max 159.0M, 151.0M free.
Nov 1 00:58:31.072000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:58:31.873000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:58:31.887000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:58:31.887000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:58:31.910000 audit: BPF prog-id=10 op=LOAD
Nov 1 00:58:31.910000 audit: BPF prog-id=10 op=UNLOAD
Nov 1 00:58:31.915000 audit: BPF prog-id=11 op=LOAD
Nov 1 00:58:31.915000 audit: BPF prog-id=11 op=UNLOAD
Nov 1 00:58:33.379000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Nov 1 00:58:33.379000 audit[1061]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058c2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:58:33.379000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:58:33.386000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Nov 1 00:58:33.386000 audit[1061]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000105999 a2=1ed a3=0 items=2 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:58:33.386000 audit: CWD cwd="/"
Nov 1 00:58:33.386000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:33.386000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:33.386000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:58:43.177000 audit: BPF prog-id=12 op=LOAD
Nov 1 00:58:43.177000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:58:43.182000 audit: BPF prog-id=13 op=LOAD
Nov 1 00:58:43.186000 audit: BPF prog-id=14 op=LOAD
Nov 1 00:58:43.186000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:58:43.186000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:58:43.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.231000 audit: BPF prog-id=12 op=UNLOAD
Nov 1 00:58:43.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.559000 audit: BPF prog-id=15 op=LOAD
Nov 1 00:58:43.560000 audit: BPF prog-id=16 op=LOAD
Nov 1 00:58:43.560000 audit: BPF prog-id=17 op=LOAD
Nov 1 00:58:43.560000 audit: BPF prog-id=13 op=UNLOAD
Nov 1 00:58:43.560000 audit: BPF prog-id=14 op=UNLOAD
Nov 1 00:58:43.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.647000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:58:43.647000 audit[1151]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff87ad7d30 a2=4000 a3=7fff87ad7dcc items=0 ppid=1 pid=1151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:58:43.647000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:58:33.333558 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:58:43.176625 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:58:33.334289 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Nov 1 00:58:43.176638 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Nov 1 00:58:33.334312 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Nov 1 00:58:43.188050 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:58:33.334352 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Nov 1 00:58:33.334363 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="skipped missing lower profile" missing profile=oem
Nov 1 00:58:33.334409 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Nov 1 00:58:33.334424 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Nov 1 00:58:33.334649 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Nov 1 00:58:33.334706 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Nov 1 00:58:33.334722 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Nov 1 00:58:33.365597 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Nov 1 00:58:33.365642 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Nov 1 00:58:33.365668 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Nov 1 00:58:33.365682 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Nov 1 00:58:33.365701 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Nov 1 00:58:33.365713 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Nov 1 00:58:41.866828 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:58:41.867412 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:58:41.867534 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:58:41.867837 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:58:41.867907 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Nov 1 00:58:41.867988 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2025-11-01T00:58:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Nov 1 00:58:43.658075 systemd[1]: Started systemd-journald.service.
Nov 1 00:58:43.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.658910 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:58:43.660880 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:58:43.663256 systemd[1]: Mounted tmp.mount.
Nov 1 00:58:43.665195 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:58:43.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.667665 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:58:43.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.669914 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:58:43.670109 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:58:43.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.672391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:58:43.672541 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:58:43.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.674751 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:58:43.674892 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:58:43.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.677524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:58:43.677684 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:58:43.679999 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:58:43.680162 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:58:43.682386 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:58:43.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.685050 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:58:43.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.691568 kernel: fuse: init (API version 7.34)
Nov 1 00:58:43.688221 systemd[1]: Reached target network-pre.target.
Nov 1 00:58:43.693129 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:58:43.695092 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:58:43.711275 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:58:43.714591 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:58:43.716863 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:58:43.718111 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:58:43.720031 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:58:43.721310 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:58:43.726670 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:58:43.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.727027 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:58:43.729266 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:58:43.733406 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:58:43.736405 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:58:43.739764 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:58:43.746981 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:58:43.758741 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:58:43.760499 systemd-journald[1151]: Time spent on flushing to /var/log/journal/f2853ee0bcdc4fc19861b2141b9fe2bd is 34.661ms for 1157 entries.
Nov 1 00:58:43.760499 systemd-journald[1151]: System Journal (/var/log/journal/f2853ee0bcdc4fc19861b2141b9fe2bd) is 8.0M, max 2.6G, 2.6G free.
Nov 1 00:58:43.843387 systemd-journald[1151]: Received client request to flush runtime journal.
Nov 1 00:58:43.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:43.764884 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:58:43.820454 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:58:43.844565 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:58:43.829391 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:58:43.833030 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:58:43.844387 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:58:43.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:44.357978 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:58:44.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:44.361677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:58:44.769125 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:58:44.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:44.901789 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:58:44.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:44.903000 audit: BPF prog-id=18 op=LOAD
Nov 1 00:58:44.903000 audit: BPF prog-id=19 op=LOAD
Nov 1 00:58:44.903000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:58:44.903000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:58:44.905367 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:58:44.923994 systemd-udevd[1190]: Using default interface naming scheme 'v252'.
Nov 1 00:58:45.233732 systemd[1]: Started systemd-udevd.service.
Nov 1 00:58:45.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:45.236000 audit: BPF prog-id=20 op=LOAD
Nov 1 00:58:45.241115 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:58:45.276668 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Nov 1 00:58:45.307000 audit: BPF prog-id=21 op=LOAD
Nov 1 00:58:45.307000 audit: BPF prog-id=22 op=LOAD
Nov 1 00:58:45.307000 audit: BPF prog-id=23 op=LOAD
Nov 1 00:58:45.309885 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:58:45.373967 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:58:45.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:45.409621 kernel: hv_utils: Registering HyperV Utility Driver
Nov 1 00:58:45.409718 kernel: hv_vmbus: registering driver hv_utils
Nov 1 00:58:45.386000 audit[1202]: AVC avc: denied { confidentiality } for pid=1202 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:58:45.419953 kernel: hv_vmbus: registering driver hyperv_fb
Nov 1 00:58:45.420028 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:58:45.443680 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 1 00:58:45.443790 kernel: hv_vmbus: registering driver hv_balloon
Nov 1 00:58:45.443812 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 1 00:58:45.455526 kernel: hv_utils: Shutdown IC version 3.2
Nov 1 00:58:45.455601 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 1 00:58:45.455618 kernel: hv_utils: Heartbeat IC version 3.0
Nov 1 00:58:45.461490 kernel: Console: switching to colour dummy device 80x25
Nov 1 00:58:45.461566 kernel: hv_utils: TimeSync IC version 4.0
Nov 1 00:58:46.187933 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:58:45.386000 audit[1202]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561113e56c00 a1=f83c a2=7fe6b6d17bc5 a3=5 items=12 ppid=1190 pid=1202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:58:45.386000 audit: CWD cwd="/"
Nov 1 00:58:45.386000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=1 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=2 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=3 name=(null) inode=15419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=4 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=5 name=(null) inode=15420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=6 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=7 name=(null) inode=15421 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=8 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=9 name=(null) inode=15422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=10 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PATH item=11 name=(null) inode=15423 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:45.386000 audit: PROCTITLE proctitle="(udev-worker)"
Nov 1 00:58:46.427263 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Nov 1 00:58:46.433951 systemd-networkd[1196]: lo: Link UP
Nov 1 00:58:46.433965 systemd-networkd[1196]: lo: Gained carrier
Nov 1 00:58:46.434644 systemd-networkd[1196]: Enumeration completed
Nov 1 00:58:46.434749 systemd[1]: Started systemd-networkd.service.
Nov 1 00:58:46.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:46.441390 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:58:46.446403 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:58:46.455122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:58:46.507549 kernel: mlx5_core 2721:00:02.0 enP10017s1: Link up Nov 1 00:58:46.507902 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 00:58:46.528271 kernel: hv_netvsc 7c1e522e-8926-7c1e-522e-89267c1e522e eth0: Data path switched to VF: enP10017s1 Nov 1 00:58:46.528383 systemd-networkd[1196]: enP10017s1: Link UP Nov 1 00:58:46.528574 systemd-networkd[1196]: eth0: Link UP Nov 1 00:58:46.528581 systemd-networkd[1196]: eth0: Gained carrier Nov 1 00:58:46.534604 systemd-networkd[1196]: enP10017s1: Gained carrier Nov 1 00:58:46.541626 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:58:46.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:46.545618 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:58:46.564397 systemd-networkd[1196]: eth0: DHCPv4 address 10.200.4.7/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 1 00:58:46.928511 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:58:46.973444 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:58:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:46.976172 systemd[1]: Reached target cryptsetup.target. Nov 1 00:58:46.979724 systemd[1]: Starting lvm2-activation.service... Nov 1 00:58:46.984439 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:58:47.003393 systemd[1]: Finished lvm2-activation.service. Nov 1 00:58:47.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:58:47.005804 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:58:47.007932 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:58:47.007969 systemd[1]: Reached target local-fs.target. Nov 1 00:58:47.009971 systemd[1]: Reached target machines.target. Nov 1 00:58:47.013312 systemd[1]: Starting ldconfig.service... Nov 1 00:58:47.015437 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:58:47.015542 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:47.016847 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:58:47.019882 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:58:47.023639 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:58:47.026697 systemd[1]: Starting systemd-sysext.service... Nov 1 00:58:47.079808 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1271 (bootctl) Nov 1 00:58:47.081308 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:58:47.550531 systemd-networkd[1196]: eth0: Gained IPv6LL Nov 1 00:58:47.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.552378 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:58:47.563803 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:58:47.602424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Nov 1 00:58:47.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.609330 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:58:47.609590 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:58:47.673264 kernel: loop0: detected capacity change from 0 to 219144 Nov 1 00:58:47.739264 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:58:47.756277 kernel: loop1: detected capacity change from 0 to 219144 Nov 1 00:58:47.771699 (sd-sysext)[1283]: Using extensions 'kubernetes'. Nov 1 00:58:47.772197 (sd-sysext)[1283]: Merged extensions into '/usr'. Nov 1 00:58:47.791261 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:47.792856 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:58:47.794971 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:58:47.798967 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:58:47.802340 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:58:47.805506 systemd[1]: Starting modprobe@loop.service... Nov 1 00:58:47.807300 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:58:47.807460 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:47.807603 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:47.808587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:58:47.808751 systemd[1]: Finished modprobe@dm_mod.service. 
Nov 1 00:58:47.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.811394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:58:47.811545 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:58:47.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.814434 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:58:47.814586 systemd[1]: Finished modprobe@loop.service. Nov 1 00:58:47.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:47.816824 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 1 00:58:47.816958 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.010019 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:58:48.013466 systemd[1]: Finished systemd-sysext.service. Nov 1 00:58:48.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.017221 systemd[1]: Starting ensure-sysext.service... Nov 1 00:58:48.021370 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:58:48.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.026925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:58:48.029358 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:58:48.032740 systemd[1]: Reloading. Nov 1 00:58:48.042722 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:58:48.072880 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:58:48.075830 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Nov 1 00:58:48.106942 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2025-11-01T00:58:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:58:48.108701 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2025-11-01T00:58:48Z" level=info msg="torcx already run" Nov 1 00:58:48.202465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:58:48.202488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:58:48.219369 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:58:48.274376 systemd-fsck[1280]: fsck.fat 4.2 (2021-01-31) Nov 1 00:58:48.274376 systemd-fsck[1280]: /dev/sda1: 790 files, 120773/258078 clusters Nov 1 00:58:48.291000 audit: BPF prog-id=24 op=LOAD Nov 1 00:58:48.291000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:58:48.292000 audit: BPF prog-id=25 op=LOAD Nov 1 00:58:48.292000 audit: BPF prog-id=26 op=LOAD Nov 1 00:58:48.292000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:58:48.293000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:58:48.294000 audit: BPF prog-id=27 op=LOAD Nov 1 00:58:48.294000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:58:48.294000 audit: BPF prog-id=28 op=LOAD Nov 1 00:58:48.294000 audit: BPF prog-id=29 op=LOAD Nov 1 00:58:48.294000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:58:48.294000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:58:48.295000 audit: BPF prog-id=30 op=LOAD Nov 1 00:58:48.295000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:58:48.295000 audit: BPF prog-id=31 op=LOAD Nov 1 00:58:48.295000 audit: BPF prog-id=32 op=LOAD Nov 1 00:58:48.295000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:58:48.295000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:58:48.299924 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:58:48.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.308306 systemd[1]: Mounting boot.mount... Nov 1 00:58:48.320007 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:48.320373 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.321971 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:58:48.325918 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:58:48.329510 systemd[1]: Starting modprobe@loop.service... 
Nov 1 00:58:48.331355 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.331538 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:48.331695 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:48.332894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:58:48.333074 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:58:48.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.336992 systemd[1]: Mounted boot.mount. Nov 1 00:58:48.339785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:58:48.339949 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:58:48.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.342674 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:58:48.342832 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:58:48.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.351534 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.353222 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:58:48.356776 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:58:48.360196 systemd[1]: Starting modprobe@loop.service... Nov 1 00:58:48.361973 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.362188 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:48.364218 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:58:48.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.366961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:58:48.367114 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:58:48.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:58:48.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.369777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:58:48.369930 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:58:48.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.372644 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:58:48.372794 systemd[1]: Finished modprobe@loop.service. Nov 1 00:58:48.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.377917 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.379637 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:58:48.382940 systemd[1]: Starting modprobe@drm.service... Nov 1 00:58:48.386022 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:58:48.389412 systemd[1]: Starting modprobe@loop.service... 
Nov 1 00:58:48.391545 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.391750 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:48.392904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:58:48.393073 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:58:48.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.395645 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:58:48.395795 systemd[1]: Finished modprobe@drm.service. Nov 1 00:58:48.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.398140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:58:48.398444 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:58:48.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:58:48.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.401095 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:58:48.401255 systemd[1]: Finished modprobe@loop.service. Nov 1 00:58:48.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.403846 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:58:48.403979 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:58:48.405386 systemd[1]: Finished ensure-sysext.service. Nov 1 00:58:48.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.608309 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:48.608349 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:48.841834 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:58:48.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:58:48.845753 systemd[1]: Starting audit-rules.service... Nov 1 00:58:48.849101 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:58:48.852637 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:58:48.854000 audit: BPF prog-id=33 op=LOAD Nov 1 00:58:48.857114 systemd[1]: Starting systemd-resolved.service... Nov 1 00:58:48.858000 audit: BPF prog-id=34 op=LOAD Nov 1 00:58:48.861338 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:58:48.865368 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:58:48.885000 audit[1392]: SYSTEM_BOOT pid=1392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.892036 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:58:48.900548 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:58:48.902831 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:58:48.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.915987 kernel: kauditd_printk_skb: 124 callbacks suppressed Nov 1 00:58:48.916119 kernel: audit: type=1130 audit(1761958728.901:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:58:48.952564 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:58:48.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:48.954813 systemd[1]: Reached target time-set.target. Nov 1 00:58:48.966263 kernel: audit: type=1130 audit(1761958728.953:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:49.033266 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:58:49.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:49.047303 kernel: audit: type=1130 audit(1761958729.034:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:49.066075 systemd-resolved[1390]: Positive Trust Anchors: Nov 1 00:58:49.066095 systemd-resolved[1390]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:58:49.066134 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:58:49.104841 systemd-timesyncd[1391]: Contacted time server 109.74.197.50:123 (0.flatcar.pool.ntp.org). Nov 1 00:58:49.104994 systemd-timesyncd[1391]: Initial clock synchronization to Sat 2025-11-01 00:58:49.105049 UTC. Nov 1 00:58:49.168586 systemd-resolved[1390]: Using system hostname 'ci-3510.3.8-n-e458e05b0a'. Nov 1 00:58:49.170351 systemd[1]: Started systemd-resolved.service. Nov 1 00:58:49.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:49.172815 systemd[1]: Reached target network.target. Nov 1 00:58:49.183945 kernel: audit: type=1130 audit(1761958729.171:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:49.183897 systemd[1]: Reached target network-online.target. Nov 1 00:58:49.185695 systemd[1]: Reached target nss-lookup.target. Nov 1 00:58:49.225000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:58:49.228202 systemd[1]: Finished audit-rules.service. 
Nov 1 00:58:49.225000 audit[1408]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1bfbb1e0 a2=420 a3=0 items=0 ppid=1387 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:58:49.225000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:58:49.233629 augenrules[1408]: No rules Nov 1 00:58:49.234266 kernel: audit: type=1305 audit(1761958729.225:212): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:58:49.234317 kernel: audit: type=1300 audit(1761958729.225:212): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1bfbb1e0 a2=420 a3=0 items=0 ppid=1387 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:58:49.234346 kernel: audit: type=1327 audit(1761958729.225:212): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:58:55.138126 ldconfig[1270]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:58:55.147391 systemd[1]: Finished ldconfig.service. Nov 1 00:58:55.151232 systemd[1]: Starting systemd-update-done.service... Nov 1 00:58:55.161098 systemd[1]: Finished systemd-update-done.service. Nov 1 00:58:55.163215 systemd[1]: Reached target sysinit.target. Nov 1 00:58:55.164960 systemd[1]: Started motdgen.path. Nov 1 00:58:55.166339 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:58:55.168939 systemd[1]: Started logrotate.timer. Nov 1 00:58:55.170530 systemd[1]: Started mdadm.timer. Nov 1 00:58:55.171823 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Nov 1 00:58:55.173478 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:58:55.173514 systemd[1]: Reached target paths.target.
Nov 1 00:58:55.174952 systemd[1]: Reached target timers.target.
Nov 1 00:58:55.176797 systemd[1]: Listening on dbus.socket.
Nov 1 00:58:55.179360 systemd[1]: Starting docker.socket...
Nov 1 00:58:55.183699 systemd[1]: Listening on sshd.socket.
Nov 1 00:58:55.185256 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:58:55.185776 systemd[1]: Listening on docker.socket.
Nov 1 00:58:55.187636 systemd[1]: Reached target sockets.target.
Nov 1 00:58:55.189342 systemd[1]: Reached target basic.target.
Nov 1 00:58:55.192476 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:58:55.192518 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:58:55.193761 systemd[1]: Starting containerd.service...
Nov 1 00:58:55.196881 systemd[1]: Starting dbus.service...
Nov 1 00:58:55.199762 systemd[1]: Starting enable-oem-cloudinit.service...
Nov 1 00:58:55.202744 systemd[1]: Starting extend-filesystems.service...
Nov 1 00:58:55.204329 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Nov 1 00:58:55.219006 systemd[1]: Starting kubelet.service...
Nov 1 00:58:55.222638 systemd[1]: Starting motdgen.service...
Nov 1 00:58:55.226076 systemd[1]: Started nvidia.service.
Nov 1 00:58:55.229992 systemd[1]: Starting prepare-helm.service...
Nov 1 00:58:55.233874 systemd[1]: Starting ssh-key-proc-cmdline.service...
Nov 1 00:58:55.237106 systemd[1]: Starting sshd-keygen.service...
Nov 1 00:58:55.241855 systemd[1]: Starting systemd-logind.service...
Nov 1 00:58:55.243660 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:58:55.243772 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:58:55.244290 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:58:55.246041 systemd[1]: Starting update-engine.service...
Nov 1 00:58:55.249421 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Nov 1 00:58:55.275566 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:58:55.275819 systemd[1]: Finished ssh-key-proc-cmdline.service.
Nov 1 00:58:55.287879 jq[1432]: true
Nov 1 00:58:55.289253 jq[1418]: false
Nov 1 00:58:55.288698 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:58:55.288934 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Nov 1 00:58:55.301110 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:58:55.301348 systemd[1]: Finished motdgen.service.
Nov 1 00:58:55.303057 extend-filesystems[1419]: Found loop1
Nov 1 00:58:55.305504 extend-filesystems[1419]: Found sda
Nov 1 00:58:55.306969 extend-filesystems[1419]: Found sda1
Nov 1 00:58:55.306969 extend-filesystems[1419]: Found sda2
Nov 1 00:58:55.306969 extend-filesystems[1419]: Found sda3
Nov 1 00:58:55.314871 extend-filesystems[1419]: Found usr
Nov 1 00:58:55.314871 extend-filesystems[1419]: Found sda4
Nov 1 00:58:55.314871 extend-filesystems[1419]: Found sda6
Nov 1 00:58:55.314871 extend-filesystems[1419]: Found sda7
Nov 1 00:58:55.314871 extend-filesystems[1419]: Found sda9
Nov 1 00:58:55.314871 extend-filesystems[1419]: Checking size of /dev/sda9
Nov 1 00:58:55.356325 jq[1445]: true
Nov 1 00:58:55.403829 env[1442]: time="2025-11-01T00:58:55.403294524Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Nov 1 00:58:55.409375 extend-filesystems[1419]: Old size kept for /dev/sda9
Nov 1 00:58:55.412841 extend-filesystems[1419]: Found sr0
Nov 1 00:58:55.412025 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:58:55.412257 systemd[1]: Finished extend-filesystems.service.
Nov 1 00:58:55.425975 systemd-logind[1430]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 00:58:55.427565 systemd-logind[1430]: New seat seat0.
Nov 1 00:58:55.474695 tar[1435]: linux-amd64/LICENSE
Nov 1 00:58:55.474695 tar[1435]: linux-amd64/helm
Nov 1 00:58:55.543936 env[1442]: time="2025-11-01T00:58:55.543882219Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:58:55.546258 dbus-daemon[1417]: [system] SELinux support is enabled
Nov 1 00:58:55.551410 dbus-daemon[1417]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 00:58:55.546462 systemd[1]: Started dbus.service.
Nov 1 00:58:55.550870 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:58:55.550899 systemd[1]: Reached target system-config.target.
Nov 1 00:58:55.553651 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:58:55.553673 systemd[1]: Reached target user-config.target.
Nov 1 00:58:55.555805 systemd[1]: Started systemd-logind.service.
Nov 1 00:58:55.561528 bash[1470]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:58:55.563881 env[1442]: time="2025-11-01T00:58:55.563828902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:58:55.565696 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:58:55.568830 env[1442]: time="2025-11-01T00:58:55.568783772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:58:55.568952 systemd[1]: nvidia.service: Deactivated successfully.
Nov 1 00:58:55.569083 env[1442]: time="2025-11-01T00:58:55.568952175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:58:55.569504 env[1442]: time="2025-11-01T00:58:55.569477982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:58:55.569618 env[1442]: time="2025-11-01T00:58:55.569602184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:58:55.569708 env[1442]: time="2025-11-01T00:58:55.569693385Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Nov 1 00:58:55.569793 env[1442]: time="2025-11-01T00:58:55.569778687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:58:55.569972 env[1442]: time="2025-11-01T00:58:55.569953389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:58:55.570449 env[1442]: time="2025-11-01T00:58:55.570427296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:58:55.571500 env[1442]: time="2025-11-01T00:58:55.571471811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:58:55.571598 env[1442]: time="2025-11-01T00:58:55.571582212Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:58:55.571730 env[1442]: time="2025-11-01T00:58:55.571710414Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Nov 1 00:58:55.571822 env[1442]: time="2025-11-01T00:58:55.571808215Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.588893758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.588960759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.588980659Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589053360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589074260Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589105561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589125961Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589145061Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589177662Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589196862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589214862Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589232463Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589424865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:58:55.590969 env[1442]: time="2025-11-01T00:58:55.589538267Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.589913772Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.589970373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.589992773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590066474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590086575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590116475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590133575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590150976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590168276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590195076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590210776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590229977Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590521081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590545681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.591552 env[1442]: time="2025-11-01T00:58:55.590610482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.592059 env[1442]: time="2025-11-01T00:58:55.590630482Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 00:58:55.592059 env[1442]: time="2025-11-01T00:58:55.590662583Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Nov 1 00:58:55.592059 env[1442]: time="2025-11-01T00:58:55.590679383Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 00:58:55.592059 env[1442]: time="2025-11-01T00:58:55.590708284Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Nov 1 00:58:55.592059 env[1442]: time="2025-11-01T00:58:55.590761284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.592391107Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.592503109Z" level=info msg="Connect containerd service"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.592546610Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.593394522Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.593581924Z" level=info msg="Start subscribing containerd event"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.593631725Z" level=info msg="Start recovering state"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.593714226Z" level=info msg="Start event monitor"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.593735726Z" level=info msg="Start snapshots syncer"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.593747227Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:58:55.594048 env[1442]: time="2025-11-01T00:58:55.593757027Z" level=info msg="Start streaming server"
Nov 1 00:58:55.647894 env[1442]: time="2025-11-01T00:58:55.594404736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:58:55.647894 env[1442]: time="2025-11-01T00:58:55.594463837Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:58:55.647894 env[1442]: time="2025-11-01T00:58:55.601369935Z" level=info msg="containerd successfully booted in 0.200336s"
Nov 1 00:58:55.601467 systemd[1]: Started containerd.service.
Nov 1 00:58:56.245651 update_engine[1431]: I1101 00:58:56.243880 1431 main.cc:92] Flatcar Update Engine starting
Nov 1 00:58:56.294477 systemd[1]: Started update-engine.service.
Nov 1 00:58:56.296933 update_engine[1431]: I1101 00:58:56.295742 1431 update_check_scheduler.cc:74] Next update check in 10m34s
Nov 1 00:58:56.300845 systemd[1]: Started locksmithd.service.
Nov 1 00:58:56.361487 tar[1435]: linux-amd64/README.md
Nov 1 00:58:56.368950 systemd[1]: Finished prepare-helm.service.
Nov 1 00:58:56.773155 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:58:56.802435 systemd[1]: Finished sshd-keygen.service.
Nov 1 00:58:56.806491 systemd[1]: Starting issuegen.service...
Nov 1 00:58:56.810142 systemd[1]: Started waagent.service.
Nov 1 00:58:56.820763 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:58:56.820985 systemd[1]: Finished issuegen.service.
Nov 1 00:58:56.824717 systemd[1]: Starting systemd-user-sessions.service...
Nov 1 00:58:56.835905 systemd[1]: Started kubelet.service.
Nov 1 00:58:56.846160 systemd[1]: Finished systemd-user-sessions.service.
Nov 1 00:58:56.850018 systemd[1]: Started getty@tty1.service.
Nov 1 00:58:56.853643 systemd[1]: Started serial-getty@ttyS0.service.
Nov 1 00:58:56.855898 systemd[1]: Reached target getty.target.
Nov 1 00:58:56.858129 systemd[1]: Reached target multi-user.target.
Nov 1 00:58:56.861717 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Nov 1 00:58:56.876725 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 1 00:58:56.876945 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Nov 1 00:58:56.879287 systemd[1]: Startup finished in 793ms (firmware) + 15.745s (loader) + 885ms (kernel) + 13.833s (initrd) + 25.592s (userspace) = 56.850s.
Nov 1 00:58:57.236770 login[1540]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 00:58:57.237151 login[1541]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 00:58:57.265420 systemd[1]: Created slice user-500.slice.
Nov 1 00:58:57.267085 systemd[1]: Starting user-runtime-dir@500.service...
Nov 1 00:58:57.275866 systemd-logind[1430]: New session 1 of user core.
Nov 1 00:58:57.281288 systemd-logind[1430]: New session 2 of user core.
Nov 1 00:58:57.286322 systemd[1]: Finished user-runtime-dir@500.service.
Nov 1 00:58:57.290147 systemd[1]: Starting user@500.service...
Nov 1 00:58:57.307551 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:58:57.382268 kubelet[1537]: E1101 00:58:57.381203 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:58:57.384264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:58:57.384444 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:58:57.384796 systemd[1]: kubelet.service: Consumed 1.017s CPU time.
Nov 1 00:58:57.564462 systemd[1550]: Queued start job for default target default.target.
Nov 1 00:58:57.565215 systemd[1550]: Reached target paths.target.
Nov 1 00:58:57.565259 systemd[1550]: Reached target sockets.target.
Nov 1 00:58:57.565278 systemd[1550]: Reached target timers.target.
Nov 1 00:58:57.565293 systemd[1550]: Reached target basic.target.
Nov 1 00:58:57.565417 systemd[1]: Started user@500.service.
Nov 1 00:58:57.566640 systemd[1]: Started session-1.scope.
Nov 1 00:58:57.567466 systemd[1]: Started session-2.scope.
Nov 1 00:58:57.568391 systemd[1550]: Reached target default.target.
Nov 1 00:58:57.568621 systemd[1550]: Startup finished in 252ms.
Nov 1 00:58:57.804431 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:59:02.331164 waagent[1529]: 2025-11-01T00:59:02.331041Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Nov 1 00:59:02.335542 waagent[1529]: 2025-11-01T00:59:02.335453Z INFO Daemon Daemon OS: flatcar 3510.3.8
Nov 1 00:59:02.338253 waagent[1529]: 2025-11-01T00:59:02.338178Z INFO Daemon Daemon Python: 3.9.16
Nov 1 00:59:02.340976 waagent[1529]: 2025-11-01T00:59:02.340893Z INFO Daemon Daemon Run daemon
Nov 1 00:59:02.343774 waagent[1529]: 2025-11-01T00:59:02.343691Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8'
Nov 1 00:59:02.355500 waagent[1529]: 2025-11-01T00:59:02.355358Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Nov 1 00:59:02.364121 waagent[1529]: 2025-11-01T00:59:02.363981Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 1 00:59:02.368792 waagent[1529]: 2025-11-01T00:59:02.368701Z INFO Daemon Daemon cloud-init is enabled: False
Nov 1 00:59:02.371698 waagent[1529]: 2025-11-01T00:59:02.371608Z INFO Daemon Daemon Using waagent for provisioning
Nov 1 00:59:02.375027 waagent[1529]: 2025-11-01T00:59:02.374947Z INFO Daemon Daemon Activate resource disk
Nov 1 00:59:02.377451 waagent[1529]: 2025-11-01T00:59:02.377380Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Nov 1 00:59:02.387580 waagent[1529]: 2025-11-01T00:59:02.387478Z INFO Daemon Daemon Found device: None
Nov 1 00:59:02.390220 waagent[1529]: 2025-11-01T00:59:02.390126Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Nov 1 00:59:02.394227 waagent[1529]: 2025-11-01T00:59:02.394140Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Nov 1 00:59:02.400122 waagent[1529]: 2025-11-01T00:59:02.400039Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 1 00:59:02.403653 waagent[1529]: 2025-11-01T00:59:02.403567Z INFO Daemon Daemon Running default provisioning handler
Nov 1 00:59:02.418527 waagent[1529]: 2025-11-01T00:59:02.418373Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Nov 1 00:59:02.426419 waagent[1529]: 2025-11-01T00:59:02.426279Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 1 00:59:02.430987 waagent[1529]: 2025-11-01T00:59:02.430892Z INFO Daemon Daemon cloud-init is enabled: False
Nov 1 00:59:02.433500 waagent[1529]: 2025-11-01T00:59:02.433415Z INFO Daemon Daemon Copying ovf-env.xml
Nov 1 00:59:02.499127 waagent[1529]: 2025-11-01T00:59:02.494340Z INFO Daemon Daemon Successfully mounted dvd
Nov 1 00:59:02.559664 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Nov 1 00:59:02.594674 waagent[1529]: 2025-11-01T00:59:02.594445Z INFO Daemon Daemon Detect protocol endpoint
Nov 1 00:59:02.597123 waagent[1529]: 2025-11-01T00:59:02.595941Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 1 00:59:02.600008 waagent[1529]: 2025-11-01T00:59:02.599925Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Nov 1 00:59:02.603196 waagent[1529]: 2025-11-01T00:59:02.603123Z INFO Daemon Daemon Test for route to 168.63.129.16
Nov 1 00:59:02.605967 waagent[1529]: 2025-11-01T00:59:02.605893Z INFO Daemon Daemon Route to 168.63.129.16 exists
Nov 1 00:59:02.608464 waagent[1529]: 2025-11-01T00:59:02.608398Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Nov 1 00:59:02.740466 waagent[1529]: 2025-11-01T00:59:02.740378Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Nov 1 00:59:02.747587 waagent[1529]: 2025-11-01T00:59:02.741422Z INFO Daemon Daemon Wire protocol version:2012-11-30
Nov 1 00:59:02.747587 waagent[1529]: 2025-11-01T00:59:02.742101Z INFO Daemon Daemon Server preferred version:2015-04-05
Nov 1 00:59:03.050352 waagent[1529]: 2025-11-01T00:59:03.050175Z INFO Daemon Daemon Initializing goal state during protocol detection
Nov 1 00:59:03.060979 waagent[1529]: 2025-11-01T00:59:03.060886Z INFO Daemon Daemon Forcing an update of the goal state..
Nov 1 00:59:03.065613 waagent[1529]: 2025-11-01T00:59:03.061319Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Nov 1 00:59:03.206719 waagent[1529]: 2025-11-01T00:59:03.206572Z INFO Daemon Daemon Found private key matching thumbprint 2ABC957B2D1AE22B46A27683CC78BDA9A7593AE4
Nov 1 00:59:03.212538 waagent[1529]: 2025-11-01T00:59:03.207181Z INFO Daemon Daemon Fetch goal state completed
Nov 1 00:59:03.229526 waagent[1529]: 2025-11-01T00:59:03.229449Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: a55d7463-7c05-4867-9552-7f667261f124 New eTag: 8251343524866198395]
Nov 1 00:59:03.237927 waagent[1529]: 2025-11-01T00:59:03.230624Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Nov 1 00:59:03.278428 waagent[1529]: 2025-11-01T00:59:03.278318Z INFO Daemon Daemon Starting provisioning
Nov 1 00:59:03.280400 waagent[1529]: 2025-11-01T00:59:03.278811Z INFO Daemon Daemon Handle ovf-env.xml.
Nov 1 00:59:03.280400 waagent[1529]: 2025-11-01T00:59:03.279639Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-e458e05b0a]
Nov 1 00:59:03.304838 waagent[1529]: 2025-11-01T00:59:03.304662Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-e458e05b0a]
Nov 1 00:59:03.308563 waagent[1529]: 2025-11-01T00:59:03.308447Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Nov 1 00:59:03.312040 waagent[1529]: 2025-11-01T00:59:03.311952Z INFO Daemon Daemon Primary interface is [eth0]
Nov 1 00:59:03.326761 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Nov 1 00:59:03.327021 systemd[1]: Stopped systemd-networkd-wait-online.service.
Nov 1 00:59:03.327093 systemd[1]: Stopping systemd-networkd-wait-online.service...
Nov 1 00:59:03.327481 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:59:03.332294 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Nov 1 00:59:03.333735 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:59:03.333941 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:59:03.336668 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:59:03.369756 systemd-networkd[1587]: enP10017s1: Link UP
Nov 1 00:59:03.369765 systemd-networkd[1587]: enP10017s1: Gained carrier
Nov 1 00:59:03.371366 systemd-networkd[1587]: eth0: Link UP
Nov 1 00:59:03.371375 systemd-networkd[1587]: eth0: Gained carrier
Nov 1 00:59:03.371839 systemd-networkd[1587]: lo: Link UP
Nov 1 00:59:03.371849 systemd-networkd[1587]: lo: Gained carrier
Nov 1 00:59:03.372174 systemd-networkd[1587]: eth0: Gained IPv6LL
Nov 1 00:59:03.372495 systemd-networkd[1587]: Enumeration completed
Nov 1 00:59:03.374347 waagent[1529]: 2025-11-01T00:59:03.374003Z INFO Daemon Daemon Create user account if not exists
Nov 1 00:59:03.372631 systemd[1]: Started systemd-networkd.service.
Nov 1 00:59:03.377015 waagent[1529]: 2025-11-01T00:59:03.375424Z INFO Daemon Daemon User core already exists, skip useradd
Nov 1 00:59:03.377015 waagent[1529]: 2025-11-01T00:59:03.376144Z INFO Daemon Daemon Configure sudoer
Nov 1 00:59:03.377668 waagent[1529]: 2025-11-01T00:59:03.377604Z INFO Daemon Daemon Configure sshd
Nov 1 00:59:03.378955 waagent[1529]: 2025-11-01T00:59:03.378899Z INFO Daemon Daemon Deploy ssh public key.
Nov 1 00:59:03.384020 systemd-networkd[1587]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:59:03.385510 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:59:03.420350 systemd-networkd[1587]: eth0: DHCPv4 address 10.200.4.7/24, gateway 10.200.4.1 acquired from 168.63.129.16
Nov 1 00:59:03.423803 systemd[1]: Finished systemd-networkd-wait-online.service.
Nov 1 00:59:04.490218 waagent[1529]: 2025-11-01T00:59:04.490099Z INFO Daemon Daemon Provisioning complete
Nov 1 00:59:04.504899 waagent[1529]: 2025-11-01T00:59:04.504810Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Nov 1 00:59:04.508256 waagent[1529]: 2025-11-01T00:59:04.508157Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Nov 1 00:59:04.513661 waagent[1529]: 2025-11-01T00:59:04.513580Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Nov 1 00:59:04.785152 waagent[1594]: 2025-11-01T00:59:04.784974Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Nov 1 00:59:04.785896 waagent[1594]: 2025-11-01T00:59:04.785828Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:59:04.786054 waagent[1594]: 2025-11-01T00:59:04.786000Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:59:04.797102 waagent[1594]: 2025-11-01T00:59:04.797016Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Nov 1 00:59:04.797294 waagent[1594]: 2025-11-01T00:59:04.797224Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Nov 1 00:59:04.858063 waagent[1594]: 2025-11-01T00:59:04.857928Z INFO ExtHandler ExtHandler Found private key matching thumbprint 2ABC957B2D1AE22B46A27683CC78BDA9A7593AE4
Nov 1 00:59:04.858410 waagent[1594]: 2025-11-01T00:59:04.858346Z INFO ExtHandler ExtHandler Fetch goal state completed
Nov 1 00:59:04.873292 waagent[1594]: 2025-11-01T00:59:04.873207Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 57ed8f63-e1b5-4d8f-9a80-191fc14f5e8c New eTag: 8251343524866198395]
Nov 1 00:59:04.873880 waagent[1594]: 2025-11-01T00:59:04.873819Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Nov 1 00:59:04.949288 waagent[1594]: 2025-11-01T00:59:04.949105Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 1 00:59:04.959810 waagent[1594]: 2025-11-01T00:59:04.959709Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1594
Nov 1 00:59:04.963275 waagent[1594]: 2025-11-01T00:59:04.963184Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Nov 1 00:59:04.964484 waagent[1594]: 2025-11-01T00:59:04.964418Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 1 00:59:05.033265 waagent[1594]: 2025-11-01T00:59:05.033181Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Nov 1 00:59:05.033700 waagent[1594]: 2025-11-01T00:59:05.033632Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 1 00:59:05.042345 waagent[1594]: 2025-11-01T00:59:05.042212Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 1 00:59:05.042826 waagent[1594]: 2025-11-01T00:59:05.042762Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Nov 1 00:59:05.043922 waagent[1594]: 2025-11-01T00:59:05.043857Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Nov 1 00:59:05.045212 waagent[1594]: 2025-11-01T00:59:05.045153Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 1 00:59:05.046212 waagent[1594]: 2025-11-01T00:59:05.046153Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 1 00:59:05.046708 waagent[1594]: 2025-11-01T00:59:05.046650Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:59:05.047213 waagent[1594]: 2025-11-01T00:59:05.047159Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:59:05.047392 waagent[1594]: 2025-11-01T00:59:05.047345Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:59:05.047860 waagent[1594]: 2025-11-01T00:59:05.047805Z INFO EnvHandler ExtHandler Configure routes Nov 1 00:59:05.048084 waagent[1594]: 2025-11-01T00:59:05.048029Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 1 00:59:05.048395 waagent[1594]: 2025-11-01T00:59:05.048342Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 1 00:59:05.048546 waagent[1594]: 2025-11-01T00:59:05.048479Z INFO EnvHandler ExtHandler Gateway:None Nov 1 00:59:05.048883 waagent[1594]: 2025-11-01T00:59:05.048837Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:59:05.049144 waagent[1594]: 2025-11-01T00:59:05.049094Z INFO EnvHandler ExtHandler Routes:None Nov 1 00:59:05.049645 waagent[1594]: 2025-11-01T00:59:05.049587Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 1 00:59:05.050347 waagent[1594]: 2025-11-01T00:59:05.050297Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 1 00:59:05.050861 waagent[1594]: 2025-11-01T00:59:05.050807Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Nov 1 00:59:05.054140 waagent[1594]: 2025-11-01T00:59:05.053928Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 1 00:59:05.055459 waagent[1594]: 2025-11-01T00:59:05.055385Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 1 00:59:05.055459 waagent[1594]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 1 00:59:05.055459 waagent[1594]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Nov 1 00:59:05.055459 waagent[1594]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 1 00:59:05.055459 waagent[1594]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:59:05.055459 waagent[1594]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:59:05.055459 waagent[1594]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:59:05.078795 waagent[1594]: 2025-11-01T00:59:05.078720Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Nov 1 00:59:05.079550 waagent[1594]: 2025-11-01T00:59:05.079496Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Nov 1 00:59:05.080444 waagent[1594]: 2025-11-01T00:59:05.080385Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Nov 1 00:59:05.107186 waagent[1594]: 2025-11-01T00:59:05.107071Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1587' Nov 1 00:59:05.155133 waagent[1594]: 2025-11-01T00:59:05.155062Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Nov 1 00:59:05.225980 waagent[1594]: 2025-11-01T00:59:05.225854Z INFO MonitorHandler ExtHandler Network interfaces: Nov 1 00:59:05.225980 waagent[1594]: Executing ['ip', '-a', '-o', 'link']: Nov 1 00:59:05.225980 waagent[1594]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 1 00:59:05.225980 waagent[1594]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:89:26 brd ff:ff:ff:ff:ff:ff Nov 1 00:59:05.225980 waagent[1594]: 3: enP10017s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:89:26 brd ff:ff:ff:ff:ff:ff\ altname enP10017p0s2 Nov 1 00:59:05.225980 waagent[1594]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 1 00:59:05.225980 waagent[1594]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 1 00:59:05.225980 waagent[1594]: 2: eth0 inet 10.200.4.7/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 1 00:59:05.225980 waagent[1594]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 1 00:59:05.225980 waagent[1594]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Nov 1 00:59:05.225980 waagent[1594]: 2: eth0 inet6 fe80::7e1e:52ff:fe2e:8926/64 scope link \ valid_lft forever preferred_lft forever Nov 1 00:59:05.458770 waagent[1594]: 2025-11-01T00:59:05.458569Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Nov 1 00:59:05.462150 waagent[1594]: 2025-11-01T00:59:05.462027Z INFO EnvHandler ExtHandler Firewall rules: Nov 1 00:59:05.462150 waagent[1594]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:05.462150 waagent[1594]: pkts bytes target prot opt in out source destination Nov 1 00:59:05.462150 waagent[1594]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:05.462150 waagent[1594]: pkts bytes target prot opt in out source 
destination Nov 1 00:59:05.462150 waagent[1594]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:05.462150 waagent[1594]: pkts bytes target prot opt in out source destination Nov 1 00:59:05.462150 waagent[1594]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 1 00:59:05.462150 waagent[1594]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 1 00:59:05.463597 waagent[1594]: 2025-11-01T00:59:05.463540Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 1 00:59:05.503633 waagent[1594]: 2025-11-01T00:59:05.503548Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.15.0.1 -- exiting Nov 1 00:59:06.518517 waagent[1529]: 2025-11-01T00:59:06.518353Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Nov 1 00:59:06.525886 waagent[1529]: 2025-11-01T00:59:06.525800Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.15.0.1 to be the latest agent Nov 1 00:59:07.479659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:59:07.479938 systemd[1]: Stopped kubelet.service. Nov 1 00:59:07.479994 systemd[1]: kubelet.service: Consumed 1.017s CPU time. Nov 1 00:59:07.481962 systemd[1]: Starting kubelet.service... Nov 1 00:59:07.628224 systemd[1]: Started kubelet.service. 
Nov 1 00:59:07.705208 kubelet[1637]: E1101 00:59:07.705160 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:59:07.709514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:59:07.709687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:59:07.715581 waagent[1630]: 2025-11-01T00:59:07.715488Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.15.0.1) Nov 1 00:59:07.716345 waagent[1630]: 2025-11-01T00:59:07.716281Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Nov 1 00:59:07.716513 waagent[1630]: 2025-11-01T00:59:07.716463Z INFO ExtHandler ExtHandler Python: 3.9.16 Nov 1 00:59:07.716674 waagent[1630]: 2025-11-01T00:59:07.716626Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 1 00:59:07.732750 waagent[1630]: 2025-11-01T00:59:07.732630Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 1 00:59:07.733179 waagent[1630]: 2025-11-01T00:59:07.733117Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:59:07.733427 waagent[1630]: 2025-11-01T00:59:07.733338Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:59:07.733638 waagent[1630]: 2025-11-01T00:59:07.733586Z INFO ExtHandler ExtHandler Initializing the goal state... 
Nov 1 00:59:07.746215 waagent[1630]: 2025-11-01T00:59:07.746122Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 1 00:59:07.754293 waagent[1630]: 2025-11-01T00:59:07.754210Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Nov 1 00:59:07.755322 waagent[1630]: 2025-11-01T00:59:07.755260Z INFO ExtHandler Nov 1 00:59:07.755506 waagent[1630]: 2025-11-01T00:59:07.755454Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 374ca0fd-1fef-47e1-b2de-a7165879be69 eTag: 8251343524866198395 source: Fabric] Nov 1 00:59:07.756249 waagent[1630]: 2025-11-01T00:59:07.756178Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Nov 1 00:59:07.757367 waagent[1630]: 2025-11-01T00:59:07.757305Z INFO ExtHandler Nov 1 00:59:07.757522 waagent[1630]: 2025-11-01T00:59:07.757472Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 1 00:59:07.765931 waagent[1630]: 2025-11-01T00:59:07.765862Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 1 00:59:07.766452 waagent[1630]: 2025-11-01T00:59:07.766399Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Nov 1 00:59:07.785902 waagent[1630]: 2025-11-01T00:59:07.785829Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Nov 1 00:59:07.842646 waagent[1630]: 2025-11-01T00:59:07.842514Z INFO ExtHandler Downloaded certificate {'thumbprint': '2ABC957B2D1AE22B46A27683CC78BDA9A7593AE4', 'hasPrivateKey': True} Nov 1 00:59:07.843951 waagent[1630]: 2025-11-01T00:59:07.843877Z INFO ExtHandler Fetch goal state from WireServer completed Nov 1 00:59:07.844881 waagent[1630]: 2025-11-01T00:59:07.844817Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Nov 1 00:59:07.863931 waagent[1630]: 2025-11-01T00:59:07.863820Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Nov 1 00:59:07.872612 waagent[1630]: 2025-11-01T00:59:07.872504Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Nov 1 00:59:07.876525 waagent[1630]: 2025-11-01T00:59:07.876422Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Nov 1 00:59:07.876764 waagent[1630]: 2025-11-01T00:59:07.876709Z INFO ExtHandler ExtHandler Checking state of the firewall Nov 1 00:59:08.262055 waagent[1630]: 2025-11-01T00:59:08.261924Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Nov 1 00:59:08.262055 waagent[1630]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.262055 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.262055 waagent[1630]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.262055 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.262055 waagent[1630]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.262055 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.262055 waagent[1630]: 83 9315 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 1 00:59:08.262055 waagent[1630]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 1 00:59:08.263327 waagent[1630]: 2025-11-01T00:59:08.263252Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Nov 1 00:59:08.266217 waagent[1630]: 2025-11-01T00:59:08.266107Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Nov 1 00:59:08.266681 waagent[1630]: 2025-11-01T00:59:08.266621Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up /lib/systemd/system/waagent-network-setup.service Nov 1 00:59:08.287054 waagent[1630]: 2025-11-01T00:59:08.286925Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 1 00:59:08.296499 waagent[1630]: 2025-11-01T00:59:08.296433Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Nov 1 00:59:08.297080 waagent[1630]: 2025-11-01T00:59:08.297013Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Nov 1 00:59:08.304660 waagent[1630]: 2025-11-01T00:59:08.304574Z INFO ExtHandler ExtHandler WALinuxAgent-2.15.0.1 running as process 1630 Nov 1 00:59:08.307887 waagent[1630]: 2025-11-01T00:59:08.307812Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Nov 1 00:59:08.308729 waagent[1630]: 2025-11-01T00:59:08.308665Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Nov 1 00:59:08.309623 waagent[1630]: 2025-11-01T00:59:08.309558Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 1 00:59:08.312180 waagent[1630]: 2025-11-01T00:59:08.312118Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Nov 1 00:59:08.312540 waagent[1630]: 2025-11-01T00:59:08.312483Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 1 00:59:08.314193 waagent[1630]: 2025-11-01T00:59:08.314133Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 1 00:59:08.314681 waagent[1630]: 2025-11-01T00:59:08.314624Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:59:08.314854 waagent[1630]: 2025-11-01T00:59:08.314805Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:59:08.315411 waagent[1630]: 2025-11-01T00:59:08.315357Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Nov 1 00:59:08.315879 waagent[1630]: 2025-11-01T00:59:08.315824Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 1 00:59:08.316308 waagent[1630]: 2025-11-01T00:59:08.316254Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:59:08.316565 waagent[1630]: 2025-11-01T00:59:08.316514Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 1 00:59:08.316565 waagent[1630]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 1 00:59:08.316565 waagent[1630]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Nov 1 00:59:08.316565 waagent[1630]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 1 00:59:08.316565 waagent[1630]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:59:08.316565 waagent[1630]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:59:08.316565 waagent[1630]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:59:08.317061 waagent[1630]: 2025-11-01T00:59:08.317008Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 1 00:59:08.317394 waagent[1630]: 2025-11-01T00:59:08.317343Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:59:08.317491 waagent[1630]: 2025-11-01T00:59:08.317432Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 1 00:59:08.318462 waagent[1630]: 2025-11-01T00:59:08.318408Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 1 00:59:08.318724 waagent[1630]: 2025-11-01T00:59:08.318673Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Nov 1 00:59:08.319070 waagent[1630]: 2025-11-01T00:59:08.319015Z INFO EnvHandler ExtHandler Configure routes Nov 1 00:59:08.322060 waagent[1630]: 2025-11-01T00:59:08.321954Z INFO EnvHandler ExtHandler Gateway:None Nov 1 00:59:08.322387 waagent[1630]: 2025-11-01T00:59:08.322329Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 1 00:59:08.324741 waagent[1630]: 2025-11-01T00:59:08.324690Z INFO EnvHandler ExtHandler Routes:None Nov 1 00:59:08.349163 waagent[1630]: 2025-11-01T00:59:08.349078Z INFO ExtHandler ExtHandler Downloading agent manifest Nov 1 00:59:08.350784 waagent[1630]: 2025-11-01T00:59:08.350725Z INFO MonitorHandler ExtHandler Network interfaces: Nov 1 00:59:08.350784 waagent[1630]: Executing ['ip', '-a', '-o', 'link']: Nov 1 00:59:08.350784 waagent[1630]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 1 00:59:08.350784 waagent[1630]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:89:26 brd ff:ff:ff:ff:ff:ff Nov 1 00:59:08.350784 waagent[1630]: 3: enP10017s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:89:26 brd ff:ff:ff:ff:ff:ff\ altname enP10017p0s2 Nov 1 00:59:08.350784 waagent[1630]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 1 00:59:08.350784 waagent[1630]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 1 00:59:08.350784 waagent[1630]: 2: eth0 inet 10.200.4.7/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 1 00:59:08.350784 waagent[1630]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 1 00:59:08.350784 waagent[1630]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Nov 1 00:59:08.350784 waagent[1630]: 2: eth0 inet6 fe80::7e1e:52ff:fe2e:8926/64 scope link \ valid_lft forever preferred_lft 
forever Nov 1 00:59:08.372358 waagent[1630]: 2025-11-01T00:59:08.372274Z INFO ExtHandler ExtHandler Nov 1 00:59:08.373595 waagent[1630]: 2025-11-01T00:59:08.373521Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bcf0b17b-a649-45c9-b1e0-7b9a82f39243 correlation e6cf6fab-6515-400a-8c50-6ac079f24e72 created: 2025-11-01T00:57:49.851509Z] Nov 1 00:59:08.376095 waagent[1630]: 2025-11-01T00:59:08.375852Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 1 00:59:08.383095 waagent[1630]: 2025-11-01T00:59:08.383023Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Nov 1 00:59:08.385401 waagent[1630]: 2025-11-01T00:59:08.385331Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 13 ms] Nov 1 00:59:08.417418 waagent[1630]: 2025-11-01T00:59:08.417341Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: Nov 1 00:59:08.417418 waagent[1630]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.417418 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.417418 waagent[1630]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.417418 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.417418 waagent[1630]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.417418 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.417418 waagent[1630]: 120 14610 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 1 00:59:08.417418 waagent[1630]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 1 00:59:08.432623 waagent[1630]: 2025-11-01T00:59:08.432545Z INFO ExtHandler ExtHandler Looking for existing remote access users. Nov 1 00:59:08.438136 waagent[1630]: 2025-11-01T00:59:08.437991Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.15.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 97E29B6E-D5C8-4164-AE8C-9485A2E8A0C3;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Nov 1 00:59:08.477546 waagent[1630]: 2025-11-01T00:59:08.477430Z INFO EnvHandler ExtHandler The firewall was setup successfully: Nov 1 00:59:08.477546 waagent[1630]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.477546 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.477546 waagent[1630]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.477546 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.477546 waagent[1630]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:59:08.477546 waagent[1630]: pkts bytes target prot opt in out source destination Nov 1 00:59:08.477546 waagent[1630]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 1 00:59:08.477546 waagent[1630]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 owner UID match 0 Nov 1 00:59:08.477546 waagent[1630]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 1 00:59:17.729630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:59:17.729939 systemd[1]: Stopped kubelet.service. Nov 1 00:59:17.732035 systemd[1]: Starting kubelet.service... Nov 1 00:59:17.827865 systemd[1]: Started kubelet.service. Nov 1 00:59:18.514496 kubelet[1690]: E1101 00:59:18.514441 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:59:18.516362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:59:18.516527 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:59:28.729660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:59:28.729994 systemd[1]: Stopped kubelet.service. Nov 1 00:59:28.732097 systemd[1]: Starting kubelet.service... Nov 1 00:59:28.829613 systemd[1]: Started kubelet.service. Nov 1 00:59:28.873117 kubelet[1699]: E1101 00:59:28.873078 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:59:28.874928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:59:28.875097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:59:34.320390 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 1 00:59:36.356202 systemd[1]: Created slice system-sshd.slice. 
Nov 1 00:59:36.358134 systemd[1]: Started sshd@0-10.200.4.7:22-10.200.16.10:43224.service. Nov 1 00:59:37.194142 sshd[1706]: Accepted publickey for core from 10.200.16.10 port 43224 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 00:59:37.195866 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:59:37.201543 systemd[1]: Started session-3.scope. Nov 1 00:59:37.202007 systemd-logind[1430]: New session 3 of user core. Nov 1 00:59:37.721332 systemd[1]: Started sshd@1-10.200.4.7:22-10.200.16.10:43234.service. Nov 1 00:59:38.319948 sshd[1711]: Accepted publickey for core from 10.200.16.10 port 43234 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 00:59:38.321529 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:59:38.325804 systemd-logind[1430]: New session 4 of user core. Nov 1 00:59:38.326413 systemd[1]: Started session-4.scope. Nov 1 00:59:38.746104 sshd[1711]: pam_unix(sshd:session): session closed for user core Nov 1 00:59:38.749515 systemd[1]: sshd@1-10.200.4.7:22-10.200.16.10:43234.service: Deactivated successfully. Nov 1 00:59:38.750401 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:59:38.751006 systemd-logind[1430]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:59:38.751779 systemd-logind[1430]: Removed session 4. Nov 1 00:59:38.845544 systemd[1]: Started sshd@2-10.200.4.7:22-10.200.16.10:43242.service. Nov 1 00:59:38.979601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 00:59:38.979936 systemd[1]: Stopped kubelet.service. Nov 1 00:59:38.982013 systemd[1]: Starting kubelet.service... Nov 1 00:59:39.080870 systemd[1]: Started kubelet.service. 
Nov 1 00:59:39.800189 sshd[1717]: Accepted publickey for core from 10.200.16.10 port 43242 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 00:59:39.761432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:59:39.800823 kubelet[1723]: E1101 00:59:39.759987 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:59:39.761552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:59:39.803580 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:59:39.807938 systemd-logind[1430]: New session 5 of user core. Nov 1 00:59:39.808589 systemd[1]: Started session-5.scope. Nov 1 00:59:40.154147 sshd[1717]: pam_unix(sshd:session): session closed for user core Nov 1 00:59:40.157891 systemd[1]: sshd@2-10.200.4.7:22-10.200.16.10:43242.service: Deactivated successfully. Nov 1 00:59:40.158899 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:59:40.159692 systemd-logind[1430]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:59:40.160680 systemd-logind[1430]: Removed session 5. Nov 1 00:59:40.254221 systemd[1]: Started sshd@3-10.200.4.7:22-10.200.16.10:44392.service. Nov 1 00:59:40.845604 sshd[1732]: Accepted publickey for core from 10.200.16.10 port 44392 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 00:59:40.847150 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:59:40.852546 systemd[1]: Started session-6.scope. Nov 1 00:59:40.853348 systemd-logind[1430]: New session 6 of user core. 
Nov 1 00:59:41.279622 sshd[1732]: pam_unix(sshd:session): session closed for user core Nov 1 00:59:41.282824 systemd[1]: sshd@3-10.200.4.7:22-10.200.16.10:44392.service: Deactivated successfully. Nov 1 00:59:41.283708 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:59:41.284354 systemd-logind[1430]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:59:41.285093 systemd-logind[1430]: Removed session 6. Nov 1 00:59:41.380823 systemd[1]: Started sshd@4-10.200.4.7:22-10.200.16.10:44400.service. Nov 1 00:59:41.568695 update_engine[1431]: I1101 00:59:41.567966 1431 update_attempter.cc:509] Updating boot flags... Nov 1 00:59:41.978915 sshd[1738]: Accepted publickey for core from 10.200.16.10 port 44400 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 00:59:41.980137 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:59:41.985647 systemd[1]: Started session-7.scope. Nov 1 00:59:41.986567 systemd-logind[1430]: New session 7 of user core. Nov 1 00:59:42.528721 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:59:42.529037 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:59:42.554293 systemd[1]: Starting docker.service... 
Nov 1 00:59:42.591261 env[1817]: time="2025-11-01T00:59:42.591205696Z" level=info msg="Starting up" Nov 1 00:59:42.592710 env[1817]: time="2025-11-01T00:59:42.592681197Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:59:42.592710 env[1817]: time="2025-11-01T00:59:42.592700297Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:59:42.592891 env[1817]: time="2025-11-01T00:59:42.592722997Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:59:42.592891 env[1817]: time="2025-11-01T00:59:42.592737597Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:59:42.594883 env[1817]: time="2025-11-01T00:59:42.594852098Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:59:42.594883 env[1817]: time="2025-11-01T00:59:42.594870398Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:59:42.595040 env[1817]: time="2025-11-01T00:59:42.594891099Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:59:42.595040 env[1817]: time="2025-11-01T00:59:42.594903599Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:59:42.602090 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2574890673-merged.mount: Deactivated successfully. Nov 1 00:59:42.707431 env[1817]: time="2025-11-01T00:59:42.707377275Z" level=info msg="Loading containers: start." Nov 1 00:59:42.846270 kernel: Initializing XFRM netlink socket Nov 1 00:59:42.905686 env[1817]: time="2025-11-01T00:59:42.905639411Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Nov 1 00:59:43.017498 systemd-networkd[1587]: docker0: Link UP Nov 1 00:59:43.039770 env[1817]: time="2025-11-01T00:59:43.039726001Z" level=info msg="Loading containers: done." Nov 1 00:59:43.057886 env[1817]: time="2025-11-01T00:59:43.057828212Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:59:43.058162 env[1817]: time="2025-11-01T00:59:43.058134313Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:59:43.058313 env[1817]: time="2025-11-01T00:59:43.058293213Z" level=info msg="Daemon has completed initialization" Nov 1 00:59:43.099392 systemd[1]: Started docker.service. Nov 1 00:59:43.111508 env[1817]: time="2025-11-01T00:59:43.111436747Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:59:44.364500 env[1442]: time="2025-11-01T00:59:44.364435235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:59:45.228312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093729562.mount: Deactivated successfully. 
Nov 1 00:59:47.111768 env[1442]: time="2025-11-01T00:59:47.111711463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:47.117968 env[1442]: time="2025-11-01T00:59:47.117917866Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:47.122948 env[1442]: time="2025-11-01T00:59:47.122903569Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:47.127098 env[1442]: time="2025-11-01T00:59:47.127063271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:47.127800 env[1442]: time="2025-11-01T00:59:47.127766871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:59:47.128518 env[1442]: time="2025-11-01T00:59:47.128494771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:59:49.979398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 1 00:59:49.979646 systemd[1]: Stopped kubelet.service. Nov 1 00:59:49.981481 systemd[1]: Starting kubelet.service... Nov 1 00:59:50.088108 systemd[1]: Started kubelet.service. 
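The kubelet restart loop that follows (restart counters 5 through 8) fails the same way every time: `/var/lib/kubelet/config.yaml` does not exist yet. On a kubeadm-provisioned node that file is only written once `kubeadm init` or `kubeadm join` has run, so failures before that point are expected rather than a fault. Purely for illustration — every value below is an assumption, not read from this host — a minimal `KubeletConfiguration` of the kind kubeadm writes there looks like:

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch only; the real file
# is generated by kubeadm and its values differ per cluster.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```

The later successful start in this log is consistent with such a file: the kubelet reports `CgroupDriver: systemd`, adds `/etc/kubernetes/manifests` as its static pod path, and loads the client CA bundle from `/etc/kubernetes/pki/ca.crt`.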
Nov 1 00:59:50.127116 kubelet[1936]: E1101 00:59:50.127076 1936 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:59:50.128801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:59:50.128964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:59:59.051388 env[1442]: time="2025-11-01T00:59:59.051330634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:59.057256 env[1442]: time="2025-11-01T00:59:59.057189429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:59.061588 env[1442]: time="2025-11-01T00:59:59.061547774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:59.066003 env[1442]: time="2025-11-01T00:59:59.065965620Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:59:59.066631 env[1442]: time="2025-11-01T00:59:59.066597341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:59:59.067376 env[1442]: time="2025-11-01T00:59:59.067347266Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 01:00:00.229678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 1 01:00:00.229992 systemd[1]: Stopped kubelet.service. Nov 1 01:00:00.232180 systemd[1]: Starting kubelet.service... Nov 1 01:00:00.332462 systemd[1]: Started kubelet.service. Nov 1 01:00:00.375331 kubelet[1946]: E1101 01:00:00.375284 1946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:00.377044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:00.377168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:01.691604 env[1442]: time="2025-11-01T01:00:01.691546326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:01.700395 env[1442]: time="2025-11-01T01:00:01.700345703Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:01.704171 env[1442]: time="2025-11-01T01:00:01.704128422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:01.708208 env[1442]: time="2025-11-01T01:00:01.708160048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 
1 01:00:01.708900 env[1442]: time="2025-11-01T01:00:01.708863071Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 01:00:01.709703 env[1442]: time="2025-11-01T01:00:01.709673696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 01:00:08.328026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658384626.mount: Deactivated successfully. Nov 1 01:00:08.838170 env[1442]: time="2025-11-01T01:00:08.838108971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:08.850952 env[1442]: time="2025-11-01T01:00:08.850898904Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:08.857316 env[1442]: time="2025-11-01T01:00:08.857266470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:08.870914 env[1442]: time="2025-11-01T01:00:08.870863024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:08.871341 env[1442]: time="2025-11-01T01:00:08.871304635Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 01:00:08.872079 env[1442]: time="2025-11-01T01:00:08.872047655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 01:00:09.580047 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1198342746.mount: Deactivated successfully. Nov 1 01:00:10.479583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Nov 1 01:00:10.479904 systemd[1]: Stopped kubelet.service. Nov 1 01:00:10.481990 systemd[1]: Starting kubelet.service... Nov 1 01:00:10.641055 systemd[1]: Started kubelet.service. Nov 1 01:00:10.685442 kubelet[1955]: E1101 01:00:10.685395 1955 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:10.687283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:10.687453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:12.090172 env[1442]: time="2025-11-01T01:00:12.090104892Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.101067 env[1442]: time="2025-11-01T01:00:12.101014548Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.105406 env[1442]: time="2025-11-01T01:00:12.105361849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.109338 env[1442]: time="2025-11-01T01:00:12.109298142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.110058 env[1442]: time="2025-11-01T01:00:12.110027759Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 01:00:12.110649 env[1442]: time="2025-11-01T01:00:12.110622973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 01:00:12.777738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651396429.mount: Deactivated successfully. Nov 1 01:00:12.818462 env[1442]: time="2025-11-01T01:00:12.818405553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.831761 env[1442]: time="2025-11-01T01:00:12.831672564Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.836759 env[1442]: time="2025-11-01T01:00:12.836700982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.841686 env[1442]: time="2025-11-01T01:00:12.841622597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:12.842139 env[1442]: time="2025-11-01T01:00:12.842101608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 01:00:12.843048 env[1442]: time="2025-11-01T01:00:12.842962429Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 01:00:20.729524 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 8. Nov 1 01:00:20.729839 systemd[1]: Stopped kubelet.service. Nov 1 01:00:20.731654 systemd[1]: Starting kubelet.service... Nov 1 01:00:20.835853 systemd[1]: Started kubelet.service. Nov 1 01:00:20.885554 kubelet[1965]: E1101 01:00:20.885499 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:20.887279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:20.887447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:22.247900 env[1442]: time="2025-11-01T01:00:22.247841825Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:22.257786 env[1442]: time="2025-11-01T01:00:22.257733406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:22.262755 env[1442]: time="2025-11-01T01:00:22.262713096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:22.266598 env[1442]: time="2025-11-01T01:00:22.266558266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:22.267279 env[1442]: time="2025-11-01T01:00:22.267223278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference 
\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 01:00:25.548722 systemd[1]: Stopped kubelet.service. Nov 1 01:00:25.551622 systemd[1]: Starting kubelet.service... Nov 1 01:00:25.580313 systemd[1]: Reloading. Nov 1 01:00:25.683919 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2025-11-01T01:00:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:00:25.699470 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2025-11-01T01:00:25Z" level=info msg="torcx already run" Nov 1 01:00:25.764414 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:00:25.764436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:00:25.780966 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:00:25.996483 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 01:00:25.996613 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 01:00:25.996942 systemd[1]: Stopped kubelet.service. Nov 1 01:00:25.999524 systemd[1]: Starting kubelet.service... Nov 1 01:00:27.020193 systemd[1]: Started kubelet.service. Nov 1 01:00:27.059797 kubelet[2081]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
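The `Reloading.` pass above warns that `locksmithd.service` still uses the legacy cgroup v1 directives `CPUShares=` and `MemoryLimit=`. `MemoryLimit=` maps directly onto `MemoryMax=`; `CPUShares=` (range 2–262144, default 1024) has to be rescaled onto the cgroup v2 `CPUWeight=` range (1–10000, default 100). The helper below sketches that rescaling as a linear map between the two ranges — this matches my understanding of what systemd does internally when translating legacy units, but treat the exact formula as an assumption:

```python
def cpu_shares_to_weight(shares: int) -> int:
    """Map a cgroup v1 CPUShares value onto the cgroup v2 CPUWeight range.

    Assumption: a linear map of [2, 262144] onto [1, 10000], which is what
    systemd appears to use when converting legacy resource directives.
    """
    shares = max(2, min(shares, 262144))        # clamp to the valid v1 range
    return 1 + (shares - 2) * 9999 // (262144 - 2)

# Range endpoints land exactly on the v2 endpoints:
print(cpu_shares_to_weight(2))       # -> 1
print(cpu_shares_to_weight(262144))  # -> 10000
print(cpu_shares_to_weight(1024))    # -> 39
```

Note the asymmetry this mapping produces: the v1 default of 1024 shares becomes a weight of 39, not the v2 default of 100, which is one reason systemd nags about updating the unit file rather than converting silently forever.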
Nov 1 01:00:27.059797 kubelet[2081]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:00:27.060315 kubelet[2081]: I1101 01:00:27.059872 2081 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:00:28.054595 kubelet[2081]: I1101 01:00:28.054552 2081 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 01:00:28.054595 kubelet[2081]: I1101 01:00:28.054580 2081 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:00:28.054595 kubelet[2081]: I1101 01:00:28.054604 2081 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 01:00:28.054951 kubelet[2081]: I1101 01:00:28.054615 2081 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 01:00:28.055033 kubelet[2081]: I1101 01:00:28.055009 2081 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 01:00:28.098300 kubelet[2081]: E1101 01:00:28.098223 2081 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 01:00:28.099450 kubelet[2081]: I1101 01:00:28.099421 2081 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:00:28.104564 kubelet[2081]: E1101 01:00:28.104525 2081 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:00:28.104701 kubelet[2081]: I1101 01:00:28.104595 2081 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 01:00:28.109958 kubelet[2081]: I1101 01:00:28.109934 2081 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 01:00:28.110916 kubelet[2081]: I1101 01:00:28.110876 2081 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:00:28.111121 kubelet[2081]: I1101 01:00:28.110914 2081 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-e458e05b0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 01:00:28.111293 kubelet[2081]: I1101 01:00:28.111128 2081 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
01:00:28.111293 kubelet[2081]: I1101 01:00:28.111142 2081 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 01:00:28.111293 kubelet[2081]: I1101 01:00:28.111288 2081 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 01:00:28.117344 kubelet[2081]: I1101 01:00:28.117312 2081 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:00:28.118762 kubelet[2081]: I1101 01:00:28.118740 2081 kubelet.go:475] "Attempting to sync node with API server" Nov 1 01:00:28.118868 kubelet[2081]: I1101 01:00:28.118774 2081 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:00:28.118868 kubelet[2081]: I1101 01:00:28.118806 2081 kubelet.go:387] "Adding apiserver pod source" Nov 1 01:00:28.118868 kubelet[2081]: I1101 01:00:28.118824 2081 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:00:28.121510 kubelet[2081]: I1101 01:00:28.121490 2081 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:00:28.122153 kubelet[2081]: I1101 01:00:28.122131 2081 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 01:00:28.122232 kubelet[2081]: I1101 01:00:28.122174 2081 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 01:00:28.122232 kubelet[2081]: W1101 01:00:28.122225 2081 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 01:00:28.124891 kubelet[2081]: I1101 01:00:28.124869 2081 server.go:1262] "Started kubelet" Nov 1 01:00:28.125816 kubelet[2081]: E1101 01:00:28.125099 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 01:00:28.125816 kubelet[2081]: E1101 01:00:28.125226 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-e458e05b0a&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 01:00:28.131668 kubelet[2081]: I1101 01:00:28.131627 2081 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:00:28.134859 kubelet[2081]: I1101 01:00:28.134819 2081 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:00:28.134957 kubelet[2081]: I1101 01:00:28.134872 2081 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 01:00:28.135352 kubelet[2081]: I1101 01:00:28.135229 2081 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:00:28.136635 kubelet[2081]: I1101 01:00:28.136615 2081 server.go:310] "Adding debug handlers to kubelet server" Nov 1 01:00:28.142996 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
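The reflector URLs in the failed watches above carry URL-encoded field selectors (`spec.clusterIP%21%3DNone`, `metadata.name%3Dci-3510.3.8-n-e458e05b0a`), which makes them hard to read at a glance. Decoding them with the standard library recovers the plain selectors:

```python
from urllib.parse import parse_qs, unquote, urlsplit

# One of the failed list/watch URLs from the log above.
url = ("https://10.200.4.7:6443/api/v1/services"
       "?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0")

# parse_qs percent-decodes values as it splits the query string.
query = parse_qs(urlsplit(url).query)
print(query["fieldSelector"][0])   # -> spec.clusterIP!=None

# The node watch filters on the node's own name the same way:
print(unquote("metadata.name%3Dci-3510.3.8-n-e458e05b0a"))
# -> metadata.name=ci-3510.3.8-n-e458e05b0a
```

So the first watch asks the API server for all Services with a real cluster IP, and the second asks only for this node's own Node object; both fail here because nothing is listening on 10.200.4.7:6443 yet — the kubelet is being started before the static-pod apiserver it is about to create.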
Nov 1 01:00:28.143244 kubelet[2081]: I1101 01:00:28.143222 2081 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:00:28.143789 kubelet[2081]: E1101 01:00:28.138875 2081 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.7:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-e458e05b0a.1873bc380b3ea0df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-e458e05b0a,UID:ci-3510.3.8-n-e458e05b0a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-e458e05b0a,},FirstTimestamp:2025-11-01 01:00:28.124840159 +0000 UTC m=+1.100626441,LastTimestamp:2025-11-01 01:00:28.124840159 +0000 UTC m=+1.100626441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-e458e05b0a,}" Nov 1 01:00:28.146176 kubelet[2081]: E1101 01:00:28.146150 2081 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:00:28.146461 kubelet[2081]: I1101 01:00:28.146440 2081 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:00:28.150339 kubelet[2081]: E1101 01:00:28.150307 2081 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" Nov 1 01:00:28.150432 kubelet[2081]: I1101 01:00:28.150362 2081 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 01:00:28.150775 kubelet[2081]: I1101 01:00:28.150758 2081 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 01:00:28.150849 kubelet[2081]: I1101 01:00:28.150815 2081 reconciler.go:29] "Reconciler: start to sync state" Nov 1 01:00:28.151321 kubelet[2081]: E1101 01:00:28.151294 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 01:00:28.151536 kubelet[2081]: I1101 01:00:28.151513 2081 factory.go:223] Registration of the systemd container factory successfully Nov 1 01:00:28.151629 kubelet[2081]: I1101 01:00:28.151610 2081 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:00:28.152973 kubelet[2081]: I1101 01:00:28.152950 2081 factory.go:223] Registration of the containerd container factory successfully Nov 1 01:00:28.158338 kubelet[2081]: E1101 01:00:28.158211 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-e458e05b0a?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" interval="200ms" Nov 1 01:00:28.205668 kubelet[2081]: I1101 01:00:28.204823 2081 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:00:28.205668 kubelet[2081]: I1101 01:00:28.204847 2081 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:00:28.205668 kubelet[2081]: I1101 01:00:28.204866 2081 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:00:28.210352 kubelet[2081]: I1101 01:00:28.210263 2081 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 01:00:28.210828 kubelet[2081]: I1101 01:00:28.210807 2081 policy_none.go:49] "None policy: Start" Nov 1 01:00:28.210927 kubelet[2081]: I1101 01:00:28.210833 2081 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 01:00:28.210927 kubelet[2081]: I1101 01:00:28.210847 2081 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 01:00:28.212642 kubelet[2081]: I1101 01:00:28.212622 2081 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 01:00:28.212642 kubelet[2081]: I1101 01:00:28.212643 2081 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 01:00:28.214066 kubelet[2081]: I1101 01:00:28.212672 2081 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 01:00:28.214066 kubelet[2081]: E1101 01:00:28.212731 2081 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:00:28.214781 kubelet[2081]: E1101 01:00:28.214752 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 01:00:28.215946 kubelet[2081]: I1101 01:00:28.215926 2081 policy_none.go:47] "Start" Nov 1 01:00:28.220438 systemd[1]: Created slice kubepods.slice. Nov 1 01:00:28.225054 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 01:00:28.227965 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 01:00:28.232964 kubelet[2081]: E1101 01:00:28.232940 2081 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 01:00:28.233533 kubelet[2081]: I1101 01:00:28.233517 2081 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:00:28.233678 kubelet[2081]: I1101 01:00:28.233636 2081 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:00:28.234229 kubelet[2081]: I1101 01:00:28.234211 2081 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:00:28.235603 kubelet[2081]: E1101 01:00:28.235583 2081 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 01:00:28.235873 kubelet[2081]: E1101 01:00:28.235855 2081 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-e458e05b0a\" not found" Nov 1 01:00:28.327659 systemd[1]: Created slice kubepods-burstable-pod5a1bf57e3da5fe5f81c041e01fac961d.slice. Nov 1 01:00:28.338507 kubelet[2081]: I1101 01:00:28.338471 2081 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.338905 kubelet[2081]: E1101 01:00:28.338877 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.339670 kubelet[2081]: E1101 01:00:28.339568 2081 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.343646 systemd[1]: Created slice kubepods-burstable-pod2390a5efe54abfa2a84558824f18b7e7.slice. Nov 1 01:00:28.345394 kubelet[2081]: E1101 01:00:28.345373 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.348016 systemd[1]: Created slice kubepods-burstable-pod23ac912251be65e31e473f7a659fdca9.slice. 
Nov 1 01:00:28.349696 kubelet[2081]: E1101 01:00:28.349678 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.359373 kubelet[2081]: E1101 01:00:28.359345 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-e458e05b0a?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" interval="400ms" Nov 1 01:00:28.452825 kubelet[2081]: I1101 01:00:28.452758 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a1bf57e3da5fe5f81c041e01fac961d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" (UID: \"5a1bf57e3da5fe5f81c041e01fac961d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.452825 kubelet[2081]: I1101 01:00:28.452818 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.453111 kubelet[2081]: I1101 01:00:28.452847 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.453111 kubelet[2081]: I1101 01:00:28.452871 2081 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.453111 kubelet[2081]: I1101 01:00:28.452899 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23ac912251be65e31e473f7a659fdca9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-e458e05b0a\" (UID: \"23ac912251be65e31e473f7a659fdca9\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.453111 kubelet[2081]: I1101 01:00:28.452922 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a1bf57e3da5fe5f81c041e01fac961d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" (UID: \"5a1bf57e3da5fe5f81c041e01fac961d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.453111 kubelet[2081]: I1101 01:00:28.452947 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a1bf57e3da5fe5f81c041e01fac961d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" (UID: \"5a1bf57e3da5fe5f81c041e01fac961d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.453385 kubelet[2081]: I1101 01:00:28.452973 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.453385 kubelet[2081]: I1101 01:00:28.453007 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.541983 kubelet[2081]: I1101 01:00:28.541949 2081 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.542453 kubelet[2081]: E1101 01:00:28.542417 2081 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.646109 env[1442]: time="2025-11-01T01:00:28.645986886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-e458e05b0a,Uid:5a1bf57e3da5fe5f81c041e01fac961d,Namespace:kube-system,Attempt:0,}" Nov 1 01:00:28.651785 env[1442]: time="2025-11-01T01:00:28.651744877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-e458e05b0a,Uid:2390a5efe54abfa2a84558824f18b7e7,Namespace:kube-system,Attempt:0,}" Nov 1 01:00:28.655838 env[1442]: time="2025-11-01T01:00:28.655798341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-e458e05b0a,Uid:23ac912251be65e31e473f7a659fdca9,Namespace:kube-system,Attempt:0,}" Nov 1 01:00:28.760748 kubelet[2081]: E1101 01:00:28.760700 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-e458e05b0a?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" 
interval="800ms" Nov 1 01:00:28.944106 kubelet[2081]: I1101 01:00:28.944004 2081 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:28.944640 kubelet[2081]: E1101 01:00:28.944604 2081 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:29.153516 kubelet[2081]: E1101 01:00:29.153475 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 01:00:29.333667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735326572.mount: Deactivated successfully. Nov 1 01:00:29.345318 kubelet[2081]: E1101 01:00:29.345279 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-e458e05b0a&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 01:00:29.367860 env[1442]: time="2025-11-01T01:00:29.367812049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.371180 env[1442]: time="2025-11-01T01:00:29.371138800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.382689 env[1442]: time="2025-11-01T01:00:29.382639078Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.385862 env[1442]: time="2025-11-01T01:00:29.385820027Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.388523 env[1442]: time="2025-11-01T01:00:29.388489168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.392295 kubelet[2081]: E1101 01:00:29.392257 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 01:00:29.392771 env[1442]: time="2025-11-01T01:00:29.392738034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.395439 env[1442]: time="2025-11-01T01:00:29.395406875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.397897 env[1442]: time="2025-11-01T01:00:29.397863113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.408563 env[1442]: time="2025-11-01T01:00:29.408519277Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.411308 env[1442]: time="2025-11-01T01:00:29.411272520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.420126 env[1442]: time="2025-11-01T01:00:29.420080955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.439835 env[1442]: time="2025-11-01T01:00:29.439779659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:29.562148 kubelet[2081]: E1101 01:00:29.562096 2081 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-e458e05b0a?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" interval="1.6s" Nov 1 01:00:29.571859 env[1442]: time="2025-11-01T01:00:29.571771295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:00:29.572013 env[1442]: time="2025-11-01T01:00:29.571865597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:00:29.572013 env[1442]: time="2025-11-01T01:00:29.571892797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:00:29.572115 env[1442]: time="2025-11-01T01:00:29.572038700Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3dcd84ca614fffc9432cca4a6f63c961517346f84500e1bef8f7c28d13db81a pid=2125 runtime=io.containerd.runc.v2 Nov 1 01:00:29.583058 env[1442]: time="2025-11-01T01:00:29.582985568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:00:29.583058 env[1442]: time="2025-11-01T01:00:29.583027269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:00:29.583279 env[1442]: time="2025-11-01T01:00:29.583041469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:00:29.583567 env[1442]: time="2025-11-01T01:00:29.583515577Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb4e639c74d6384df78ff737025a667eba3720d39e6ea4f8444b50889461a5f6 pid=2140 runtime=io.containerd.runc.v2 Nov 1 01:00:29.595938 systemd[1]: Started cri-containerd-b3dcd84ca614fffc9432cca4a6f63c961517346f84500e1bef8f7c28d13db81a.scope. Nov 1 01:00:29.603942 env[1442]: time="2025-11-01T01:00:29.602145264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:00:29.603942 env[1442]: time="2025-11-01T01:00:29.602259466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:00:29.603942 env[1442]: time="2025-11-01T01:00:29.602289866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:00:29.603942 env[1442]: time="2025-11-01T01:00:29.602434269Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a093a6be970a341508f4b8fd9f2f576c09b26c8dd04b9d51aad725817c7c5352 pid=2172 runtime=io.containerd.runc.v2 Nov 1 01:00:29.626086 systemd[1]: Started cri-containerd-bb4e639c74d6384df78ff737025a667eba3720d39e6ea4f8444b50889461a5f6.scope. Nov 1 01:00:29.639940 systemd[1]: Started cri-containerd-a093a6be970a341508f4b8fd9f2f576c09b26c8dd04b9d51aad725817c7c5352.scope. Nov 1 01:00:29.701043 env[1442]: time="2025-11-01T01:00:29.700995789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-e458e05b0a,Uid:23ac912251be65e31e473f7a659fdca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb4e639c74d6384df78ff737025a667eba3720d39e6ea4f8444b50889461a5f6\"" Nov 1 01:00:29.715031 env[1442]: time="2025-11-01T01:00:29.714975605Z" level=info msg="CreateContainer within sandbox \"bb4e639c74d6384df78ff737025a667eba3720d39e6ea4f8444b50889461a5f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:00:29.723113 env[1442]: time="2025-11-01T01:00:29.722283017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-e458e05b0a,Uid:2390a5efe54abfa2a84558824f18b7e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a093a6be970a341508f4b8fd9f2f576c09b26c8dd04b9d51aad725817c7c5352\"" Nov 1 01:00:29.725548 env[1442]: time="2025-11-01T01:00:29.725506967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-e458e05b0a,Uid:5a1bf57e3da5fe5f81c041e01fac961d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3dcd84ca614fffc9432cca4a6f63c961517346f84500e1bef8f7c28d13db81a\"" Nov 1 01:00:29.733332 env[1442]: time="2025-11-01T01:00:29.733285687Z" level=info msg="CreateContainer within sandbox 
\"a093a6be970a341508f4b8fd9f2f576c09b26c8dd04b9d51aad725817c7c5352\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:00:29.738811 env[1442]: time="2025-11-01T01:00:29.738772872Z" level=info msg="CreateContainer within sandbox \"b3dcd84ca614fffc9432cca4a6f63c961517346f84500e1bef8f7c28d13db81a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:00:29.741902 kubelet[2081]: E1101 01:00:29.741858 2081 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 01:00:29.746803 kubelet[2081]: I1101 01:00:29.746509 2081 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:29.746966 kubelet[2081]: E1101 01:00:29.746866 2081 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:29.754911 env[1442]: time="2025-11-01T01:00:29.754859420Z" level=info msg="CreateContainer within sandbox \"bb4e639c74d6384df78ff737025a667eba3720d39e6ea4f8444b50889461a5f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b1616164a3e102734e3fd14afb169e1fd22b54e03e14b865d5626be5e749133e\"" Nov 1 01:00:29.755641 env[1442]: time="2025-11-01T01:00:29.755606331Z" level=info msg="StartContainer for \"b1616164a3e102734e3fd14afb169e1fd22b54e03e14b865d5626be5e749133e\"" Nov 1 01:00:29.776347 systemd[1]: Started cri-containerd-b1616164a3e102734e3fd14afb169e1fd22b54e03e14b865d5626be5e749133e.scope. 
Nov 1 01:00:29.798409 env[1442]: time="2025-11-01T01:00:29.798355891Z" level=info msg="CreateContainer within sandbox \"a093a6be970a341508f4b8fd9f2f576c09b26c8dd04b9d51aad725817c7c5352\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"127b14eaf3097b68191db8822fedf2af9ad50322960b0ea06ab04f5199d7d588\""
Nov 1 01:00:29.799189 env[1442]: time="2025-11-01T01:00:29.799155303Z" level=info msg="StartContainer for \"127b14eaf3097b68191db8822fedf2af9ad50322960b0ea06ab04f5199d7d588\""
Nov 1 01:00:29.810435 env[1442]: time="2025-11-01T01:00:29.806897023Z" level=info msg="CreateContainer within sandbox \"b3dcd84ca614fffc9432cca4a6f63c961517346f84500e1bef8f7c28d13db81a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc12a97394bbea0186c88c55f62007bd400f399ee521a43a98ad709e297f7ebd\""
Nov 1 01:00:29.810435 env[1442]: time="2025-11-01T01:00:29.807516932Z" level=info msg="StartContainer for \"dc12a97394bbea0186c88c55f62007bd400f399ee521a43a98ad709e297f7ebd\""
Nov 1 01:00:29.841060 env[1442]: time="2025-11-01T01:00:29.841004049Z" level=info msg="StartContainer for \"b1616164a3e102734e3fd14afb169e1fd22b54e03e14b865d5626be5e749133e\" returns successfully"
Nov 1 01:00:29.845549 systemd[1]: Started cri-containerd-127b14eaf3097b68191db8822fedf2af9ad50322960b0ea06ab04f5199d7d588.scope.
Nov 1 01:00:29.858699 systemd[1]: Started cri-containerd-dc12a97394bbea0186c88c55f62007bd400f399ee521a43a98ad709e297f7ebd.scope.
Nov 1 01:00:29.949417 env[1442]: time="2025-11-01T01:00:29.949368220Z" level=info msg="StartContainer for \"127b14eaf3097b68191db8822fedf2af9ad50322960b0ea06ab04f5199d7d588\" returns successfully"
Nov 1 01:00:29.966038 env[1442]: time="2025-11-01T01:00:29.965988277Z" level=info msg="StartContainer for \"dc12a97394bbea0186c88c55f62007bd400f399ee521a43a98ad709e297f7ebd\" returns successfully"
Nov 1 01:00:30.232655 kubelet[2081]: E1101 01:00:30.232625 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:30.239490 kubelet[2081]: E1101 01:00:30.239462 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:30.244217 kubelet[2081]: E1101 01:00:30.244189 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:31.247470 kubelet[2081]: E1101 01:00:31.247435 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:31.247919 kubelet[2081]: E1101 01:00:31.247887 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:31.349770 kubelet[2081]: I1101 01:00:31.349738 2081 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:32.169364 kubelet[2081]: E1101 01:00:32.169318 2081 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:32.249622 kubelet[2081]: E1101 01:00:32.249582 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.250069 kubelet[2081]: E1101 01:00:32.250048 2081 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-e458e05b0a\" not found" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.695082 kubelet[2081]: I1101 01:00:32.695036 2081 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.755366 kubelet[2081]: I1101 01:00:32.755319 2081 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.761712 kubelet[2081]: E1101 01:00:32.761674 2081 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.761712 kubelet[2081]: I1101 01:00:32.761708 2081 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.764151 kubelet[2081]: E1101 01:00:32.763962 2081 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.764151 kubelet[2081]: I1101 01:00:32.763992 2081 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:32.765486 kubelet[2081]: E1101 01:00:32.765462 2081 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-e458e05b0a\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:33.123001 kubelet[2081]: I1101 01:00:33.122961 2081 apiserver.go:52] "Watching apiserver" Nov 1 01:00:33.151003 kubelet[2081]: I1101 01:00:33.150968 2081 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 01:00:34.144810 kubelet[2081]: I1101 01:00:34.144769 2081 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" Nov 1 01:00:34.154051 kubelet[2081]: I1101 01:00:34.154010 2081 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:00:34.521641 systemd[1]: Reloading. Nov 1 01:00:34.606958 /usr/lib/systemd/system-generators/torcx-generator[2389]: time="2025-11-01T01:00:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:00:34.607000 /usr/lib/systemd/system-generators/torcx-generator[2389]: time="2025-11-01T01:00:34Z" level=info msg="torcx already run" Nov 1 01:00:34.759599 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:00:34.759622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:00:34.776888 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:00:34.895825 systemd[1]: Stopping kubelet.service... 
Nov 1 01:00:34.913845 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 01:00:34.914074 systemd[1]: Stopped kubelet.service.
Nov 1 01:00:34.916300 systemd[1]: Starting kubelet.service...
Nov 1 01:00:35.301468 systemd[1]: Started kubelet.service.
Nov 1 01:00:35.376937 kubelet[2455]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 01:00:35.381492 kubelet[2455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:00:35.381492 kubelet[2455]: I1101 01:00:35.377604 2455 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 01:00:35.387898 kubelet[2455]: I1101 01:00:35.387864 2455 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 1 01:00:35.387898 kubelet[2455]: I1101 01:00:35.387889 2455 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 01:00:35.388105 kubelet[2455]: I1101 01:00:35.387917 2455 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 1 01:00:35.388105 kubelet[2455]: I1101 01:00:35.387925 2455 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 01:00:35.388297 kubelet[2455]: I1101 01:00:35.388280 2455 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 01:00:35.390120 kubelet[2455]: I1101 01:00:35.390074 2455 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 01:00:35.394472 kubelet[2455]: I1101 01:00:35.394441 2455 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:00:35.398898 kubelet[2455]: E1101 01:00:35.398863 2455 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:00:35.399024 kubelet[2455]: I1101 01:00:35.398917 2455 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 01:00:35.409034 kubelet[2455]: I1101 01:00:35.409000 2455 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 01:00:35.409310 kubelet[2455]: I1101 01:00:35.409280 2455 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:00:35.409624 kubelet[2455]: I1101 01:00:35.409314 2455 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-e458e05b0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 01:00:35.409755 kubelet[2455]: I1101 01:00:35.409636 2455 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
01:00:35.409755 kubelet[2455]: I1101 01:00:35.409650 2455 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 01:00:35.409755 kubelet[2455]: I1101 01:00:35.409684 2455 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 01:00:35.410762 kubelet[2455]: I1101 01:00:35.410738 2455 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:00:35.410929 kubelet[2455]: I1101 01:00:35.410915 2455 kubelet.go:475] "Attempting to sync node with API server" Nov 1 01:00:35.410985 kubelet[2455]: I1101 01:00:35.410936 2455 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:00:35.410985 kubelet[2455]: I1101 01:00:35.410968 2455 kubelet.go:387] "Adding apiserver pod source" Nov 1 01:00:35.411075 kubelet[2455]: I1101 01:00:35.411002 2455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:00:35.421549 kubelet[2455]: I1101 01:00:35.421505 2455 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:00:35.422353 kubelet[2455]: I1101 01:00:35.422333 2455 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 01:00:35.422446 kubelet[2455]: I1101 01:00:35.422431 2455 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 01:00:35.428664 kubelet[2455]: I1101 01:00:35.428643 2455 server.go:1262] "Started kubelet" Nov 1 01:00:35.434377 kubelet[2455]: I1101 01:00:35.434355 2455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:00:35.454349 kubelet[2455]: I1101 01:00:35.454284 2455 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:00:35.455412 kubelet[2455]: I1101 01:00:35.455389 2455 server.go:310] "Adding debug handlers to kubelet server" 
Nov 1 01:00:35.468935 kubelet[2455]: I1101 01:00:35.468886 2455 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 01:00:35.469090 kubelet[2455]: I1101 01:00:35.468959 2455 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 1 01:00:35.469255 kubelet[2455]: I1101 01:00:35.469228 2455 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 01:00:35.469536 kubelet[2455]: I1101 01:00:35.469516 2455 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 01:00:35.471395 kubelet[2455]: I1101 01:00:35.471375 2455 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 1 01:00:35.472203 kubelet[2455]: I1101 01:00:35.472179 2455 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 1 01:00:35.472333 kubelet[2455]: I1101 01:00:35.472318 2455 reconciler.go:29] "Reconciler: start to sync state"
Nov 1 01:00:35.479010 kubelet[2455]: I1101 01:00:35.478471 2455 factory.go:223] Registration of the systemd container factory successfully
Nov 1 01:00:35.479010 kubelet[2455]: I1101 01:00:35.478576 2455 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 01:00:35.489943 kubelet[2455]: I1101 01:00:35.488795 2455 factory.go:223] Registration of the containerd container factory successfully
Nov 1 01:00:35.500384 kubelet[2455]: I1101 01:00:35.499157 2455 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 1 01:00:35.500921 kubelet[2455]: I1101 01:00:35.500589 2455 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 1 01:00:35.500921 kubelet[2455]: I1101 01:00:35.500609 2455 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 1 01:00:35.500921 kubelet[2455]: I1101 01:00:35.500639 2455 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 1 01:00:35.500921 kubelet[2455]: E1101 01:00:35.500684 2455 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 01:00:35.568922 kubelet[2455]: I1101 01:00:35.568817 2455 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 01:00:35.569100 kubelet[2455]: I1101 01:00:35.569083 2455 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 01:00:35.569176 kubelet[2455]: I1101 01:00:35.569168 2455 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:00:35.569426 kubelet[2455]: I1101 01:00:35.569408 2455 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 1 01:00:35.569547 kubelet[2455]: I1101 01:00:35.569523 2455 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 1 01:00:35.569618 kubelet[2455]: I1101 01:00:35.569610 2455 policy_none.go:49] "None policy: Start"
Nov 1 01:00:35.569692 kubelet[2455]: I1101 01:00:35.569684 2455 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 1 01:00:35.569756 kubelet[2455]: I1101 01:00:35.569747 2455 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 1 01:00:35.569936 kubelet[2455]: I1101 01:00:35.569924 2455 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 1 01:00:35.570005 kubelet[2455]: I1101 01:00:35.569998 2455 policy_none.go:47] "Start"
Nov 1 01:00:35.577210 kubelet[2455]: E1101 01:00:35.577187 2455 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 1 01:00:35.580551 kubelet[2455]: I1101 01:00:35.580531 2455 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 01:00:35.581715 kubelet[2455]: I1101 01:00:35.581670 2455 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 01:00:35.585140 kubelet[2455]: I1101 01:00:35.585122 2455 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 01:00:35.589811 kubelet[2455]: E1101 01:00:35.589790 2455 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 01:00:35.604524 kubelet[2455]: I1101 01:00:35.602815 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.607170 kubelet[2455]: I1101 01:00:35.605609 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.609486 kubelet[2455]: I1101 01:00:35.608901 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.614953 kubelet[2455]: I1101 01:00:35.614907 2455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:00:35.623656 kubelet[2455]: I1101 01:00:35.623595 2455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:00:35.627454 kubelet[2455]: I1101 01:00:35.627431 2455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:00:35.627664 kubelet[2455]: E1101 01:00:35.627647 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.698704 kubelet[2455]: I1101 01:00:35.698676 2455 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.711940 kubelet[2455]: I1101 01:00:35.711907 2455 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.712191 kubelet[2455]: I1101 01:00:35.712179 2455 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773190 kubelet[2455]: I1101 01:00:35.773151 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773190 kubelet[2455]: I1101 01:00:35.773195 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773451 kubelet[2455]: I1101 01:00:35.773220 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a1bf57e3da5fe5f81c041e01fac961d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" (UID: \"5a1bf57e3da5fe5f81c041e01fac961d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773451 kubelet[2455]: I1101 01:00:35.773291 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a1bf57e3da5fe5f81c041e01fac961d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" (UID: \"5a1bf57e3da5fe5f81c041e01fac961d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773451 kubelet[2455]: I1101 01:00:35.773314 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773451 kubelet[2455]: I1101 01:00:35.773338 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773451 kubelet[2455]: I1101 01:00:35.773358 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23ac912251be65e31e473f7a659fdca9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-e458e05b0a\" (UID: \"23ac912251be65e31e473f7a659fdca9\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773613 kubelet[2455]: I1101 01:00:35.773378 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a1bf57e3da5fe5f81c041e01fac961d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" (UID: \"5a1bf57e3da5fe5f81c041e01fac961d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.773613 kubelet[2455]: I1101 01:00:35.773398 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2390a5efe54abfa2a84558824f18b7e7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-e458e05b0a\" (UID: \"2390a5efe54abfa2a84558824f18b7e7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:35.859696 sudo[2490]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 1 01:00:35.859993 sudo[2490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 1 01:00:36.421084 kubelet[2455]: I1101 01:00:36.421033 2455 apiserver.go:52] "Watching apiserver"
Nov 1 01:00:36.434025 sudo[2490]: pam_unix(sudo:session): session closed for user root
Nov 1 01:00:36.472348 kubelet[2455]: I1101 01:00:36.472309 2455 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 1 01:00:36.545404 kubelet[2455]: I1101 01:00:36.545371 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:36.546116 kubelet[2455]: I1101 01:00:36.546086 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:36.568038 kubelet[2455]: I1101 01:00:36.567989 2455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:00:36.568358 kubelet[2455]: E1101 01:00:36.568329 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-e458e05b0a\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:36.576739 kubelet[2455]: I1101 01:00:36.576711 2455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:00:36.577050 kubelet[2455]: E1101 01:00:36.577000 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-e458e05b0a\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a"
Nov 1 01:00:36.596112 kubelet[2455]: I1101 01:00:36.596040 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-e458e05b0a" podStartSLOduration=1.596020063 podStartE2EDuration="1.596020063s" podCreationTimestamp="2025-11-01 01:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:00:36.582448084 +0000 UTC m=+1.274886328" watchObservedRunningTime="2025-11-01 01:00:36.596020063 +0000 UTC m=+1.288458307"
Nov 1 01:00:36.608534 kubelet[2455]: I1101 01:00:36.608455 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-e458e05b0a" podStartSLOduration=1.6084355270000001 podStartE2EDuration="1.608435527s" podCreationTimestamp="2025-11-01 01:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:00:36.596727472 +0000 UTC m=+1.289165816" watchObservedRunningTime="2025-11-01 01:00:36.608435527 +0000 UTC m=+1.300873771"
Nov 1 01:00:36.625398 kubelet[2455]: I1101 01:00:36.625337 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-e458e05b0a" podStartSLOduration=2.62530575 podStartE2EDuration="2.62530575s" podCreationTimestamp="2025-11-01 01:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:00:36.609173937 +0000 UTC m=+1.301612181" watchObservedRunningTime="2025-11-01 01:00:36.62530575 +0000 UTC m=+1.317743994"
Nov 1 01:00:37.995821 sudo[1807]: pam_unix(sudo:session): session closed for user root
Nov 1 01:00:38.101509 sshd[1738]: pam_unix(sshd:session): session closed for user core
Nov 1 01:00:38.105337 systemd-logind[1430]: Session 7 logged out. Waiting for processes to exit.
Nov 1 01:00:38.105605 systemd[1]: sshd@4-10.200.4.7:22-10.200.16.10:44400.service: Deactivated successfully.
Nov 1 01:00:38.106523 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 01:00:38.106661 systemd[1]: session-7.scope: Consumed 5.494s CPU time.
Nov 1 01:00:38.107451 systemd-logind[1430]: Removed session 7.
Nov 1 01:00:39.736259 kubelet[2455]: I1101 01:00:39.736198 2455 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 1 01:00:39.736819 env[1442]: time="2025-11-01T01:00:39.736743887Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 1 01:00:39.737209 kubelet[2455]: I1101 01:00:39.737075 2455 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 1 01:00:40.481547 systemd[1]: Created slice kubepods-besteffort-pod5865ab90_b43c_4ea6_a562_8a50739160c8.slice.
Nov 1 01:00:40.499567 systemd[1]: Created slice kubepods-burstable-pod513493cc_dc29_4d19_b933_8f7df774d51b.slice.
Nov 1 01:00:40.504209 kubelet[2455]: I1101 01:00:40.504176 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-net\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.504501 kubelet[2455]: I1101 01:00:40.504481 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-kernel\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.504681 kubelet[2455]: I1101 01:00:40.504661 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-bpf-maps\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.504795 kubelet[2455]: I1101 01:00:40.504779 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-hostproc\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.504920 kubelet[2455]: I1101 01:00:40.504904 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cni-path\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.505068 kubelet[2455]: I1101 01:00:40.505053 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-xtables-lock\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.505205 kubelet[2455]: I1101 01:00:40.505188 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-run\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.505462 kubelet[2455]: I1101 01:00:40.505443 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-etc-cni-netd\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.505615 kubelet[2455]: I1101 01:00:40.505596 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-hubble-tls\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.505764 kubelet[2455]: I1101 01:00:40.505745 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2m7\" (UniqueName: \"kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-kube-api-access-rd2m7\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.505898 kubelet[2455]: I1101 01:00:40.505883 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5865ab90-b43c-4ea6-a562-8a50739160c8-kube-proxy\") pod \"kube-proxy-qxjh4\" (UID: \"5865ab90-b43c-4ea6-a562-8a50739160c8\") " pod="kube-system/kube-proxy-qxjh4"
Nov 1 01:00:40.506035 kubelet[2455]: I1101 01:00:40.506017 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5865ab90-b43c-4ea6-a562-8a50739160c8-xtables-lock\") pod \"kube-proxy-qxjh4\" (UID: \"5865ab90-b43c-4ea6-a562-8a50739160c8\") " pod="kube-system/kube-proxy-qxjh4"
Nov 1 01:00:40.506167 kubelet[2455]: I1101 01:00:40.506150 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-cgroup\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.506318 kubelet[2455]: I1101 01:00:40.506301 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-config-path\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.506453 kubelet[2455]: I1101 01:00:40.506437 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5865ab90-b43c-4ea6-a562-8a50739160c8-lib-modules\") pod \"kube-proxy-qxjh4\" (UID: \"5865ab90-b43c-4ea6-a562-8a50739160c8\") " pod="kube-system/kube-proxy-qxjh4"
Nov 1 01:00:40.506572 kubelet[2455]: I1101 01:00:40.506555 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdr4s\" (UniqueName: \"kubernetes.io/projected/5865ab90-b43c-4ea6-a562-8a50739160c8-kube-api-access-fdr4s\") pod \"kube-proxy-qxjh4\" (UID: \"5865ab90-b43c-4ea6-a562-8a50739160c8\") " pod="kube-system/kube-proxy-qxjh4"
Nov 1 01:00:40.506691 kubelet[2455]: I1101 01:00:40.506673 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-lib-modules\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.506809 kubelet[2455]: I1101 01:00:40.506794 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/513493cc-dc29-4d19-b933-8f7df774d51b-clustermesh-secrets\") pod \"cilium-rnr27\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") " pod="kube-system/cilium-rnr27"
Nov 1 01:00:40.609525 kubelet[2455]: I1101 01:00:40.609473 2455 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Nov 1 01:00:40.627661 kubelet[2455]: E1101 01:00:40.627628 2455 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 1 01:00:40.627879 kubelet[2455]: E1101 01:00:40.627864 2455 projected.go:196] Error preparing data for projected volume kube-api-access-fdr4s for pod kube-system/kube-proxy-qxjh4: configmap "kube-root-ca.crt" not found
Nov 1 01:00:40.628073 kubelet[2455]: E1101 01:00:40.628055 2455 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5865ab90-b43c-4ea6-a562-8a50739160c8-kube-api-access-fdr4s podName:5865ab90-b43c-4ea6-a562-8a50739160c8 nodeName:}" failed. No retries permitted until 2025-11-01 01:00:41.128025966 +0000 UTC m=+5.820464210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fdr4s" (UniqueName: "kubernetes.io/projected/5865ab90-b43c-4ea6-a562-8a50739160c8-kube-api-access-fdr4s") pod "kube-proxy-qxjh4" (UID: "5865ab90-b43c-4ea6-a562-8a50739160c8") : configmap "kube-root-ca.crt" not found
Nov 1 01:00:40.633460 kubelet[2455]: E1101 01:00:40.633426 2455 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 1 01:00:40.633619 kubelet[2455]: E1101 01:00:40.633605 2455 projected.go:196] Error preparing data for projected volume kube-api-access-rd2m7 for pod kube-system/cilium-rnr27: configmap "kube-root-ca.crt" not found
Nov 1 01:00:40.633752 kubelet[2455]: E1101 01:00:40.633739 2455 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-kube-api-access-rd2m7 podName:513493cc-dc29-4d19-b933-8f7df774d51b nodeName:}" failed. No retries permitted until 2025-11-01 01:00:41.133717635 +0000 UTC m=+5.826155979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rd2m7" (UniqueName: "kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-kube-api-access-rd2m7") pod "cilium-rnr27" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b") : configmap "kube-root-ca.crt" not found
Nov 1 01:00:40.872151 systemd[1]: Created slice kubepods-besteffort-pod3d9f87cf_6653_4339_85c8_53ca43ebee6b.slice.
Nov 1 01:00:40.916406 kubelet[2455]: I1101 01:00:40.916360 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmccj\" (UniqueName: \"kubernetes.io/projected/3d9f87cf-6653-4339-85c8-53ca43ebee6b-kube-api-access-fmccj\") pod \"cilium-operator-6f9c7c5859-ldsxn\" (UID: \"3d9f87cf-6653-4339-85c8-53ca43ebee6b\") " pod="kube-system/cilium-operator-6f9c7c5859-ldsxn"
Nov 1 01:00:40.917023 kubelet[2455]: I1101 01:00:40.916982 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9f87cf-6653-4339-85c8-53ca43ebee6b-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-ldsxn\" (UID: \"3d9f87cf-6653-4339-85c8-53ca43ebee6b\") " pod="kube-system/cilium-operator-6f9c7c5859-ldsxn"
Nov 1 01:00:41.187586 env[1442]: time="2025-11-01T01:00:41.187217303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ldsxn,Uid:3d9f87cf-6653-4339-85c8-53ca43ebee6b,Namespace:kube-system,Attempt:0,}"
Nov 1 01:00:41.231304 env[1442]: time="2025-11-01T01:00:41.231065424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:00:41.231304 env[1442]: time="2025-11-01T01:00:41.231115625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:00:41.231304 env[1442]: time="2025-11-01T01:00:41.231131525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:00:41.231805 env[1442]: time="2025-11-01T01:00:41.231747133Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8 pid=2536 runtime=io.containerd.runc.v2
Nov 1 01:00:41.245321 systemd[1]: Started cri-containerd-55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8.scope.
Nov 1 01:00:41.290193 env[1442]: time="2025-11-01T01:00:41.290145127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ldsxn,Uid:3d9f87cf-6653-4339-85c8-53ca43ebee6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8\""
Nov 1 01:00:41.292232 env[1442]: time="2025-11-01T01:00:41.292201451Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 1 01:00:41.393810 env[1442]: time="2025-11-01T01:00:41.393751558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxjh4,Uid:5865ab90-b43c-4ea6-a562-8a50739160c8,Namespace:kube-system,Attempt:0,}"
Nov 1 01:00:41.407650 env[1442]: time="2025-11-01T01:00:41.407605923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnr27,Uid:513493cc-dc29-4d19-b933-8f7df774d51b,Namespace:kube-system,Attempt:0,}"
Nov 1 01:00:41.440772 env[1442]: time="2025-11-01T01:00:41.438072185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:00:41.440772 env[1442]: time="2025-11-01T01:00:41.438104285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:00:41.440772 env[1442]: time="2025-11-01T01:00:41.438113485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:00:41.440772 env[1442]: time="2025-11-01T01:00:41.438705692Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f6504e93e5071dcabc5e7d36b5fd20169927a26aafcf254696e5ee6e24ed9d9 pid=2579 runtime=io.containerd.runc.v2
Nov 1 01:00:41.454073 systemd[1]: Started cri-containerd-9f6504e93e5071dcabc5e7d36b5fd20169927a26aafcf254696e5ee6e24ed9d9.scope.
Nov 1 01:00:41.474259 env[1442]: time="2025-11-01T01:00:41.473970011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:00:41.474259 env[1442]: time="2025-11-01T01:00:41.474024012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:00:41.474259 env[1442]: time="2025-11-01T01:00:41.474034312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:00:41.475196 env[1442]: time="2025-11-01T01:00:41.474294315Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a pid=2614 runtime=io.containerd.runc.v2
Nov 1 01:00:41.494962 systemd[1]: Started cri-containerd-d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a.scope.
Nov 1 01:00:41.508098 env[1442]: time="2025-11-01T01:00:41.508048716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxjh4,Uid:5865ab90-b43c-4ea6-a562-8a50739160c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f6504e93e5071dcabc5e7d36b5fd20169927a26aafcf254696e5ee6e24ed9d9\""
Nov 1 01:00:41.516660 env[1442]: time="2025-11-01T01:00:41.516615518Z" level=info msg="CreateContainer within sandbox \"9f6504e93e5071dcabc5e7d36b5fd20169927a26aafcf254696e5ee6e24ed9d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 1 01:00:41.531309 env[1442]: time="2025-11-01T01:00:41.530884788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnr27,Uid:513493cc-dc29-4d19-b933-8f7df774d51b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\""
Nov 1 01:00:41.568803 env[1442]: time="2025-11-01T01:00:41.568759138Z" level=info msg="CreateContainer within sandbox \"9f6504e93e5071dcabc5e7d36b5fd20169927a26aafcf254696e5ee6e24ed9d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de35aca4181dfcc6435fa735da865d9c3364229eccab5237ce1932bb7bec489d\""
Nov 1 01:00:41.570111 env[1442]: time="2025-11-01T01:00:41.570064153Z" level=info msg="StartContainer for \"de35aca4181dfcc6435fa735da865d9c3364229eccab5237ce1932bb7bec489d\""
Nov 1 01:00:41.590304 systemd[1]: Started cri-containerd-de35aca4181dfcc6435fa735da865d9c3364229eccab5237ce1932bb7bec489d.scope.
Nov 1 01:00:41.642534 env[1442]: time="2025-11-01T01:00:41.642483414Z" level=info msg="StartContainer for \"de35aca4181dfcc6435fa735da865d9c3364229eccab5237ce1932bb7bec489d\" returns successfully"
Nov 1 01:00:43.203630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827325082.mount: Deactivated successfully.
Nov 1 01:00:43.432204 kubelet[2455]: I1101 01:00:43.432131 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qxjh4" podStartSLOduration=3.432097841 podStartE2EDuration="3.432097841s" podCreationTimestamp="2025-11-01 01:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:00:42.575770168 +0000 UTC m=+7.268208512" watchObservedRunningTime="2025-11-01 01:00:43.432097841 +0000 UTC m=+8.124536185"
Nov 1 01:00:43.944488 env[1442]: time="2025-11-01T01:00:43.944435789Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:00:43.951985 env[1442]: time="2025-11-01T01:00:43.951928675Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:00:43.956419 env[1442]: time="2025-11-01T01:00:43.956370825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:00:43.956868 env[1442]: time="2025-11-01T01:00:43.956831530Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 1 01:00:43.960320 env[1442]: time="2025-11-01T01:00:43.960002567Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 1 01:00:43.969836 env[1442]: time="2025-11-01T01:00:43.969794778Z" level=info msg="CreateContainer within sandbox \"55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 1 01:00:44.008966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929087337.mount: Deactivated successfully.
Nov 1 01:00:44.023449 env[1442]: time="2025-11-01T01:00:44.023388985Z" level=info msg="CreateContainer within sandbox \"55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\""
Nov 1 01:00:44.024433 env[1442]: time="2025-11-01T01:00:44.024046193Z" level=info msg="StartContainer for \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\""
Nov 1 01:00:44.042765 systemd[1]: Started cri-containerd-3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866.scope.
Nov 1 01:00:44.077797 env[1442]: time="2025-11-01T01:00:44.077741693Z" level=info msg="StartContainer for \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\" returns successfully"
Nov 1 01:00:47.433116 kubelet[2455]: I1101 01:00:47.433040 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-ldsxn" podStartSLOduration=4.766473422 podStartE2EDuration="7.433020624s" podCreationTimestamp="2025-11-01 01:00:40 +0000 UTC" firstStartedPulling="2025-11-01 01:00:41.291477642 +0000 UTC m=+5.983915886" lastFinishedPulling="2025-11-01 01:00:43.958024844 +0000 UTC m=+8.650463088" observedRunningTime="2025-11-01 01:00:44.65380564 +0000 UTC m=+9.346243884" watchObservedRunningTime="2025-11-01 01:00:47.433020624 +0000 UTC m=+12.125458968"
Nov 1 01:00:49.908245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082382098.mount: Deactivated successfully.
Nov 1 01:00:52.697083 env[1442]: time="2025-11-01T01:00:52.697024552Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:00:52.705114 env[1442]: time="2025-11-01T01:00:52.705072129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:00:52.708760 env[1442]: time="2025-11-01T01:00:52.708713064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 01:00:52.709227 env[1442]: time="2025-11-01T01:00:52.709192169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 1 01:00:52.717863 env[1442]: time="2025-11-01T01:00:52.717826352Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 1 01:00:52.746297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694985130.mount: Deactivated successfully.
Nov 1 01:00:52.762350 env[1442]: time="2025-11-01T01:00:52.762298780Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\""
Nov 1 01:00:52.762973 env[1442]: time="2025-11-01T01:00:52.762941887Z" level=info msg="StartContainer for \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\""
Nov 1 01:00:52.788007 systemd[1]: Started cri-containerd-64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0.scope.
Nov 1 01:00:52.821752 env[1442]: time="2025-11-01T01:00:52.821694453Z" level=info msg="StartContainer for \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\" returns successfully"
Nov 1 01:00:52.826874 systemd[1]: cri-containerd-64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0.scope: Deactivated successfully.
Nov 1 01:00:53.738723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0-rootfs.mount: Deactivated successfully.
Nov 1 01:00:57.271344 env[1442]: time="2025-11-01T01:00:57.271272569Z" level=info msg="shim disconnected" id=64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0
Nov 1 01:00:57.271344 env[1442]: time="2025-11-01T01:00:57.271337569Z" level=warning msg="cleaning up after shim disconnected" id=64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0 namespace=k8s.io
Nov 1 01:00:57.271344 env[1442]: time="2025-11-01T01:00:57.271349570Z" level=info msg="cleaning up dead shim"
Nov 1 01:00:57.279312 env[1442]: time="2025-11-01T01:00:57.279266340Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:00:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2912 runtime=io.containerd.runc.v2\n"
Nov 1 01:00:57.617136 env[1442]: time="2025-11-01T01:00:57.617001226Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 1 01:00:57.644665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489198818.mount: Deactivated successfully.
Nov 1 01:00:57.661207 env[1442]: time="2025-11-01T01:00:57.661158016Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\""
Nov 1 01:00:57.663111 env[1442]: time="2025-11-01T01:00:57.663072233Z" level=info msg="StartContainer for \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\""
Nov 1 01:00:57.686316 systemd[1]: Started cri-containerd-487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204.scope.
Nov 1 01:00:57.718566 env[1442]: time="2025-11-01T01:00:57.718508823Z" level=info msg="StartContainer for \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\" returns successfully"
Nov 1 01:00:57.731160 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 01:00:57.731510 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 01:00:57.731703 systemd[1]: Stopping systemd-sysctl.service...
Nov 1 01:00:57.735949 systemd[1]: Starting systemd-sysctl.service...
Nov 1 01:00:57.737305 systemd[1]: cri-containerd-487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204.scope: Deactivated successfully.
Nov 1 01:00:57.760883 systemd[1]: Finished systemd-sysctl.service.
Nov 1 01:00:57.785769 env[1442]: time="2025-11-01T01:00:57.785707818Z" level=info msg="shim disconnected" id=487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204
Nov 1 01:00:57.785769 env[1442]: time="2025-11-01T01:00:57.785763118Z" level=warning msg="cleaning up after shim disconnected" id=487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204 namespace=k8s.io
Nov 1 01:00:57.785769 env[1442]: time="2025-11-01T01:00:57.785775218Z" level=info msg="cleaning up dead shim"
Nov 1 01:00:57.798142 env[1442]: time="2025-11-01T01:00:57.798084327Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:00:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2975 runtime=io.containerd.runc.v2\n"
Nov 1 01:00:58.617642 env[1442]: time="2025-11-01T01:00:58.617591985Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 1 01:00:58.640587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204-rootfs.mount: Deactivated successfully.
Nov 1 01:00:58.651606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410208354.mount: Deactivated successfully.
Nov 1 01:00:58.665811 env[1442]: time="2025-11-01T01:00:58.665763704Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\""
Nov 1 01:00:58.667779 env[1442]: time="2025-11-01T01:00:58.666615411Z" level=info msg="StartContainer for \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\""
Nov 1 01:00:58.692251 systemd[1]: Started cri-containerd-324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b.scope.
Nov 1 01:00:58.728451 systemd[1]: cri-containerd-324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b.scope: Deactivated successfully.
Nov 1 01:00:58.730464 env[1442]: time="2025-11-01T01:00:58.730422866Z" level=info msg="StartContainer for \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\" returns successfully"
Nov 1 01:00:58.772998 env[1442]: time="2025-11-01T01:00:58.772938536Z" level=info msg="shim disconnected" id=324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b
Nov 1 01:00:58.772998 env[1442]: time="2025-11-01T01:00:58.772994636Z" level=warning msg="cleaning up after shim disconnected" id=324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b namespace=k8s.io
Nov 1 01:00:58.772998 env[1442]: time="2025-11-01T01:00:58.773005537Z" level=info msg="cleaning up dead shim"
Nov 1 01:00:58.782021 env[1442]: time="2025-11-01T01:00:58.781975315Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:00:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3033 runtime=io.containerd.runc.v2\n"
Nov 1 01:00:59.624784 env[1442]: time="2025-11-01T01:00:59.624736358Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 1 01:00:59.640640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b-rootfs.mount: Deactivated successfully.
Nov 1 01:00:59.760991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826327337.mount: Deactivated successfully.
Nov 1 01:00:59.851152 env[1442]: time="2025-11-01T01:00:59.851091196Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\""
Nov 1 01:00:59.851762 env[1442]: time="2025-11-01T01:00:59.851728501Z" level=info msg="StartContainer for \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\""
Nov 1 01:00:59.871490 systemd[1]: Started cri-containerd-1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17.scope.
Nov 1 01:00:59.899210 systemd[1]: cri-containerd-1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17.scope: Deactivated successfully.
Nov 1 01:00:59.908967 env[1442]: time="2025-11-01T01:00:59.908913391Z" level=info msg="StartContainer for \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\" returns successfully"
Nov 1 01:01:00.640706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17-rootfs.mount: Deactivated successfully.
Nov 1 01:01:01.087205 env[1442]: time="2025-11-01T01:01:01.087138518Z" level=info msg="shim disconnected" id=1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17
Nov 1 01:01:01.087806 env[1442]: time="2025-11-01T01:01:01.087213019Z" level=warning msg="cleaning up after shim disconnected" id=1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17 namespace=k8s.io
Nov 1 01:01:01.087806 env[1442]: time="2025-11-01T01:01:01.087227819Z" level=info msg="cleaning up dead shim"
Nov 1 01:01:01.096141 env[1442]: time="2025-11-01T01:01:01.096088592Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:01:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3090 runtime=io.containerd.runc.v2\n"
Nov 1 01:01:01.840124 env[1442]: time="2025-11-01T01:01:01.840068263Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 1 01:01:02.098994 env[1442]: time="2025-11-01T01:01:02.098577494Z" level=info msg="CreateContainer within sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\""
Nov 1 01:01:02.100745 env[1442]: time="2025-11-01T01:01:02.100708512Z" level=info msg="StartContainer for \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\""
Nov 1 01:01:02.131395 systemd[1]: Started cri-containerd-067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2.scope.
Nov 1 01:01:02.166893 env[1442]: time="2025-11-01T01:01:02.166833252Z" level=info msg="StartContainer for \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\" returns successfully"
Nov 1 01:01:02.305230 kubelet[2455]: I1101 01:01:02.305196 2455 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Nov 1 01:01:02.367174 systemd[1]: Created slice kubepods-burstable-podb2b11fd1_f66b_405e_a442_7a08b60b8a33.slice.
Nov 1 01:01:02.380957 systemd[1]: Created slice kubepods-burstable-pod15e9da74_f31d_4b8b_bc44_81a8b57446d4.slice.
Nov 1 01:01:02.479799 kubelet[2455]: I1101 01:01:02.479738 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2b11fd1-f66b-405e-a442-7a08b60b8a33-config-volume\") pod \"coredns-66bc5c9577-qk7s7\" (UID: \"b2b11fd1-f66b-405e-a442-7a08b60b8a33\") " pod="kube-system/coredns-66bc5c9577-qk7s7"
Nov 1 01:01:02.480001 kubelet[2455]: I1101 01:01:02.479808 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8bz\" (UniqueName: \"kubernetes.io/projected/b2b11fd1-f66b-405e-a442-7a08b60b8a33-kube-api-access-5r8bz\") pod \"coredns-66bc5c9577-qk7s7\" (UID: \"b2b11fd1-f66b-405e-a442-7a08b60b8a33\") " pod="kube-system/coredns-66bc5c9577-qk7s7"
Nov 1 01:01:02.480001 kubelet[2455]: I1101 01:01:02.479835 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwlm\" (UniqueName: \"kubernetes.io/projected/15e9da74-f31d-4b8b-bc44-81a8b57446d4-kube-api-access-9rwlm\") pod \"coredns-66bc5c9577-h9k7t\" (UID: \"15e9da74-f31d-4b8b-bc44-81a8b57446d4\") " pod="kube-system/coredns-66bc5c9577-h9k7t"
Nov 1 01:01:02.480001 kubelet[2455]: I1101 01:01:02.479871 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15e9da74-f31d-4b8b-bc44-81a8b57446d4-config-volume\") pod \"coredns-66bc5c9577-h9k7t\" (UID: \"15e9da74-f31d-4b8b-bc44-81a8b57446d4\") " pod="kube-system/coredns-66bc5c9577-h9k7t"
Nov 1 01:01:02.795514 env[1442]: time="2025-11-01T01:01:02.795467786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qk7s7,Uid:b2b11fd1-f66b-405e-a442-7a08b60b8a33,Namespace:kube-system,Attempt:0,}"
Nov 1 01:01:02.838597 env[1442]: time="2025-11-01T01:01:02.838551038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-h9k7t,Uid:15e9da74-f31d-4b8b-bc44-81a8b57446d4,Namespace:kube-system,Attempt:0,}"
Nov 1 01:01:03.008141 systemd[1]: run-containerd-runc-k8s.io-067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2-runc.d2xi69.mount: Deactivated successfully.
Nov 1 01:01:04.684387 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Nov 1 01:01:04.684541 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Nov 1 01:01:04.680988 systemd-networkd[1587]: cilium_host: Link UP
Nov 1 01:01:04.681164 systemd-networkd[1587]: cilium_net: Link UP
Nov 1 01:01:04.681409 systemd-networkd[1587]: cilium_net: Gained carrier
Nov 1 01:01:04.690331 systemd-networkd[1587]: cilium_host: Gained carrier
Nov 1 01:01:04.901318 systemd-networkd[1587]: cilium_vxlan: Link UP
Nov 1 01:01:04.901331 systemd-networkd[1587]: cilium_vxlan: Gained carrier
Nov 1 01:01:05.169267 kernel: NET: Registered PF_ALG protocol family
Nov 1 01:01:05.342496 systemd-networkd[1587]: cilium_net: Gained IPv6LL
Nov 1 01:01:05.662366 systemd-networkd[1587]: cilium_host: Gained IPv6LL
Nov 1 01:01:05.998467 systemd-networkd[1587]: lxc_health: Link UP
Nov 1 01:01:06.004053 systemd-networkd[1587]: lxc_health: Gained carrier
Nov 1 01:01:06.004251 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 01:01:06.238400 systemd-networkd[1587]: cilium_vxlan: Gained IPv6LL
Nov 1 01:01:06.518328 systemd-networkd[1587]: lxc39749853a1e8: Link UP
Nov 1 01:01:06.530266 kernel: eth0: renamed from tmp217bc
Nov 1 01:01:06.541477 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc39749853a1e8: link becomes ready
Nov 1 01:01:06.540702 systemd-networkd[1587]: lxc39749853a1e8: Gained carrier
Nov 1 01:01:06.571582 systemd-networkd[1587]: lxc379f71b7e3d7: Link UP
Nov 1 01:01:06.585436 kernel: eth0: renamed from tmpc839a
Nov 1 01:01:06.599302 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc379f71b7e3d7: link becomes ready
Nov 1 01:01:06.598548 systemd-networkd[1587]: lxc379f71b7e3d7: Gained carrier
Nov 1 01:01:07.438291 kubelet[2455]: I1101 01:01:07.438210 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rnr27" podStartSLOduration=16.260591896 podStartE2EDuration="27.438185666s" podCreationTimestamp="2025-11-01 01:00:40 +0000 UTC" firstStartedPulling="2025-11-01 01:00:41.532885211 +0000 UTC m=+6.225323455" lastFinishedPulling="2025-11-01 01:00:52.710478881 +0000 UTC m=+17.402917225" observedRunningTime="2025-11-01 01:01:02.651690512 +0000 UTC m=+27.344128756" watchObservedRunningTime="2025-11-01 01:01:07.438185666 +0000 UTC m=+32.130623910"
Nov 1 01:01:07.454496 systemd-networkd[1587]: lxc_health: Gained IPv6LL
Nov 1 01:01:07.710414 systemd-networkd[1587]: lxc379f71b7e3d7: Gained IPv6LL
Nov 1 01:01:08.286533 systemd-networkd[1587]: lxc39749853a1e8: Gained IPv6LL
Nov 1 01:01:10.308336 env[1442]: time="2025-11-01T01:01:10.308261051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:01:10.308855 env[1442]: time="2025-11-01T01:01:10.308821955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:01:10.308966 env[1442]: time="2025-11-01T01:01:10.308943756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:01:10.309183 env[1442]: time="2025-11-01T01:01:10.309156558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/217bc415df7c70f361fd537a55b500a0d25994d6b817d0444fc75e07d712b5f5 pid=3633 runtime=io.containerd.runc.v2
Nov 1 01:01:10.345779 systemd[1]: run-containerd-runc-k8s.io-217bc415df7c70f361fd537a55b500a0d25994d6b817d0444fc75e07d712b5f5-runc.Io8kVL.mount: Deactivated successfully.
Nov 1 01:01:10.350307 systemd[1]: Started cri-containerd-217bc415df7c70f361fd537a55b500a0d25994d6b817d0444fc75e07d712b5f5.scope.
Nov 1 01:01:10.363246 env[1442]: time="2025-11-01T01:01:10.363157551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:01:10.363555 env[1442]: time="2025-11-01T01:01:10.363468754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:01:10.363555 env[1442]: time="2025-11-01T01:01:10.363496354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:01:10.364316 env[1442]: time="2025-11-01T01:01:10.364257159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c839aebb65fb5c5c92e9821fb0ed8feda58c5059a8facf764545d5b3ce4f26e7 pid=3661 runtime=io.containerd.runc.v2
Nov 1 01:01:10.406897 systemd[1]: Started cri-containerd-c839aebb65fb5c5c92e9821fb0ed8feda58c5059a8facf764545d5b3ce4f26e7.scope.
Nov 1 01:01:10.463451 env[1442]: time="2025-11-01T01:01:10.463402682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qk7s7,Uid:b2b11fd1-f66b-405e-a442-7a08b60b8a33,Namespace:kube-system,Attempt:0,} returns sandbox id \"217bc415df7c70f361fd537a55b500a0d25994d6b817d0444fc75e07d712b5f5\""
Nov 1 01:01:10.472425 env[1442]: time="2025-11-01T01:01:10.472375847Z" level=info msg="CreateContainer within sandbox \"217bc415df7c70f361fd537a55b500a0d25994d6b817d0444fc75e07d712b5f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 01:01:10.513314 env[1442]: time="2025-11-01T01:01:10.513263945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-h9k7t,Uid:15e9da74-f31d-4b8b-bc44-81a8b57446d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c839aebb65fb5c5c92e9821fb0ed8feda58c5059a8facf764545d5b3ce4f26e7\""
Nov 1 01:01:10.642972 env[1442]: time="2025-11-01T01:01:10.642827490Z" level=info msg="CreateContainer within sandbox \"c839aebb65fb5c5c92e9821fb0ed8feda58c5059a8facf764545d5b3ce4f26e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 01:01:10.947398 env[1442]: time="2025-11-01T01:01:10.947131507Z" level=info msg="CreateContainer within sandbox \"217bc415df7c70f361fd537a55b500a0d25994d6b817d0444fc75e07d712b5f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"367626489c93da6949151aa351fd32521eacaa7cd1e00657699a6a5bbcd212b6\""
Nov 1 01:01:10.948026 env[1442]: time="2025-11-01T01:01:10.947964513Z" level=info msg="StartContainer for \"367626489c93da6949151aa351fd32521eacaa7cd1e00657699a6a5bbcd212b6\""
Nov 1 01:01:10.967633 systemd[1]: Started cri-containerd-367626489c93da6949151aa351fd32521eacaa7cd1e00657699a6a5bbcd212b6.scope.
Nov 1 01:01:11.043039 env[1442]: time="2025-11-01T01:01:11.042958902Z" level=info msg="StartContainer for \"367626489c93da6949151aa351fd32521eacaa7cd1e00657699a6a5bbcd212b6\" returns successfully"
Nov 1 01:01:11.091731 env[1442]: time="2025-11-01T01:01:11.091664652Z" level=info msg="CreateContainer within sandbox \"c839aebb65fb5c5c92e9821fb0ed8feda58c5059a8facf764545d5b3ce4f26e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f8f69ffe8833a879946937752ed9e7c7db748596dc2d3f627a01acfa5885a84\""
Nov 1 01:01:11.092830 env[1442]: time="2025-11-01T01:01:11.092780660Z" level=info msg="StartContainer for \"8f8f69ffe8833a879946937752ed9e7c7db748596dc2d3f627a01acfa5885a84\""
Nov 1 01:01:11.113418 systemd[1]: Started cri-containerd-8f8f69ffe8833a879946937752ed9e7c7db748596dc2d3f627a01acfa5885a84.scope.
Nov 1 01:01:11.157085 env[1442]: time="2025-11-01T01:01:11.157037122Z" level=info msg="StartContainer for \"8f8f69ffe8833a879946937752ed9e7c7db748596dc2d3f627a01acfa5885a84\" returns successfully"
Nov 1 01:01:11.670105 kubelet[2455]: I1101 01:01:11.670022 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h9k7t" podStartSLOduration=31.670000712 podStartE2EDuration="31.670000712s" podCreationTimestamp="2025-11-01 01:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:11.669298707 +0000 UTC m=+36.361737051" watchObservedRunningTime="2025-11-01 01:01:11.670000712 +0000 UTC m=+36.362438956"
Nov 1 01:01:11.731793 kubelet[2455]: I1101 01:01:11.731699 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qk7s7" podStartSLOduration=31.731674355 podStartE2EDuration="31.731674355s" podCreationTimestamp="2025-11-01 01:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:11.73088435 +0000 UTC m=+36.423322594" watchObservedRunningTime="2025-11-01 01:01:11.731674355 +0000 UTC m=+36.424112599"
Nov 1 01:02:39.262264 systemd[1]: Started sshd@5-10.200.4.7:22-10.200.16.10:50534.service.
Nov 1 01:02:39.858303 sshd[3802]: Accepted publickey for core from 10.200.16.10 port 50534 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:02:39.859823 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:02:39.864870 systemd[1]: Started session-8.scope.
Nov 1 01:02:39.865538 systemd-logind[1430]: New session 8 of user core.
Nov 1 01:02:40.468584 sshd[3802]: pam_unix(sshd:session): session closed for user core
Nov 1 01:02:40.472158 systemd-logind[1430]: Session 8 logged out. Waiting for processes to exit.
Nov 1 01:02:40.472405 systemd[1]: sshd@5-10.200.4.7:22-10.200.16.10:50534.service: Deactivated successfully.
Nov 1 01:02:40.473329 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 01:02:40.474537 systemd-logind[1430]: Removed session 8.
Nov 1 01:02:45.568750 systemd[1]: Started sshd@6-10.200.4.7:22-10.200.16.10:60812.service.
Nov 1 01:02:46.160631 sshd[3817]: Accepted publickey for core from 10.200.16.10 port 60812 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:02:46.162231 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:02:46.167978 systemd-logind[1430]: New session 9 of user core.
Nov 1 01:02:46.168652 systemd[1]: Started session-9.scope.
Nov 1 01:02:46.651281 sshd[3817]: pam_unix(sshd:session): session closed for user core
Nov 1 01:02:46.654514 systemd[1]: sshd@6-10.200.4.7:22-10.200.16.10:60812.service: Deactivated successfully.
Nov 1 01:02:46.655596 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 01:02:46.656494 systemd-logind[1430]: Session 9 logged out. Waiting for processes to exit.
Nov 1 01:02:46.657358 systemd-logind[1430]: Removed session 9.
Nov 1 01:02:51.751667 systemd[1]: Started sshd@7-10.200.4.7:22-10.200.16.10:47824.service.
Nov 1 01:02:52.341374 sshd[3830]: Accepted publickey for core from 10.200.16.10 port 47824 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:02:52.342934 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:02:52.347965 systemd-logind[1430]: New session 10 of user core.
Nov 1 01:02:52.348686 systemd[1]: Started session-10.scope.
Nov 1 01:02:52.818326 sshd[3830]: pam_unix(sshd:session): session closed for user core
Nov 1 01:02:52.821420 systemd[1]: sshd@7-10.200.4.7:22-10.200.16.10:47824.service: Deactivated successfully.
Nov 1 01:02:52.822423 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 01:02:52.823196 systemd-logind[1430]: Session 10 logged out. Waiting for processes to exit.
Nov 1 01:02:52.824062 systemd-logind[1430]: Removed session 10.
Nov 1 01:02:57.916725 systemd[1]: Started sshd@8-10.200.4.7:22-10.200.16.10:47828.service.
Nov 1 01:02:58.503468 sshd[3844]: Accepted publickey for core from 10.200.16.10 port 47828 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:02:58.504950 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:02:58.510094 systemd[1]: Started session-11.scope.
Nov 1 01:02:58.510743 systemd-logind[1430]: New session 11 of user core.
Nov 1 01:02:58.980628 sshd[3844]: pam_unix(sshd:session): session closed for user core
Nov 1 01:02:58.984951 systemd[1]: sshd@8-10.200.4.7:22-10.200.16.10:47828.service: Deactivated successfully.
Nov 1 01:02:58.985856 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 01:02:58.986306 systemd-logind[1430]: Session 11 logged out. Waiting for processes to exit.
Nov 1 01:02:58.987089 systemd-logind[1430]: Removed session 11.
Nov 1 01:03:04.081756 systemd[1]: Started sshd@9-10.200.4.7:22-10.200.16.10:36288.service.
Nov 1 01:03:04.677649 sshd[3857]: Accepted publickey for core from 10.200.16.10 port 36288 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:04.679372 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:04.684402 systemd[1]: Started session-12.scope.
Nov 1 01:03:04.685318 systemd-logind[1430]: New session 12 of user core.
Nov 1 01:03:05.176004 sshd[3857]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:05.178900 systemd[1]: sshd@9-10.200.4.7:22-10.200.16.10:36288.service: Deactivated successfully.
Nov 1 01:03:05.179903 systemd[1]: session-12.scope: Deactivated successfully.
Nov 1 01:03:05.180618 systemd-logind[1430]: Session 12 logged out. Waiting for processes to exit.
Nov 1 01:03:05.181458 systemd-logind[1430]: Removed session 12.
Nov 1 01:03:10.277505 systemd[1]: Started sshd@10-10.200.4.7:22-10.200.16.10:40520.service.
Nov 1 01:03:10.868329 sshd[3870]: Accepted publickey for core from 10.200.16.10 port 40520 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:10.869758 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:10.874867 systemd[1]: Started session-13.scope.
Nov 1 01:03:10.875694 systemd-logind[1430]: New session 13 of user core.
Nov 1 01:03:11.366122 sshd[3870]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:11.369626 systemd[1]: sshd@10-10.200.4.7:22-10.200.16.10:40520.service: Deactivated successfully.
Nov 1 01:03:11.370808 systemd[1]: session-13.scope: Deactivated successfully.
Nov 1 01:03:11.371947 systemd-logind[1430]: Session 13 logged out. Waiting for processes to exit.
Nov 1 01:03:11.373027 systemd-logind[1430]: Removed session 13.
Nov 1 01:03:16.467659 systemd[1]: Started sshd@11-10.200.4.7:22-10.200.16.10:40530.service.
Nov 1 01:03:17.065894 sshd[3885]: Accepted publickey for core from 10.200.16.10 port 40530 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:17.068067 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:17.073114 systemd-logind[1430]: New session 14 of user core.
Nov 1 01:03:17.073672 systemd[1]: Started session-14.scope.
Nov 1 01:03:17.561634 sshd[3885]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:17.564997 systemd[1]: sshd@11-10.200.4.7:22-10.200.16.10:40530.service: Deactivated successfully.
Nov 1 01:03:17.565886 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 01:03:17.566293 systemd-logind[1430]: Session 14 logged out. Waiting for processes to exit.
Nov 1 01:03:17.567304 systemd-logind[1430]: Removed session 14.
Nov 1 01:03:17.662987 systemd[1]: Started sshd@12-10.200.4.7:22-10.200.16.10:40534.service.
Nov 1 01:03:18.253713 sshd[3898]: Accepted publickey for core from 10.200.16.10 port 40534 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:18.255460 sshd[3898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:18.260548 systemd[1]: Started session-15.scope.
Nov 1 01:03:18.261181 systemd-logind[1430]: New session 15 of user core.
Nov 1 01:03:18.787453 sshd[3898]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:18.791391 systemd[1]: sshd@12-10.200.4.7:22-10.200.16.10:40534.service: Deactivated successfully.
Nov 1 01:03:18.792380 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 01:03:18.793320 systemd-logind[1430]: Session 15 logged out. Waiting for processes to exit.
Nov 1 01:03:18.794097 systemd-logind[1430]: Removed session 15.
Nov 1 01:03:18.886714 systemd[1]: Started sshd@13-10.200.4.7:22-10.200.16.10:40542.service.
Nov 1 01:03:19.481258 sshd[3907]: Accepted publickey for core from 10.200.16.10 port 40542 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:19.485072 sshd[3907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:19.494015 systemd-logind[1430]: New session 16 of user core.
Nov 1 01:03:19.494411 systemd[1]: Started session-16.scope.
Nov 1 01:03:19.972477 sshd[3907]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:19.975858 systemd[1]: sshd@13-10.200.4.7:22-10.200.16.10:40542.service: Deactivated successfully.
Nov 1 01:03:19.976923 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 01:03:19.977642 systemd-logind[1430]: Session 16 logged out. Waiting for processes to exit.
Nov 1 01:03:19.978486 systemd-logind[1430]: Removed session 16.
Nov 1 01:03:25.075883 systemd[1]: Started sshd@14-10.200.4.7:22-10.200.16.10:57118.service.
Nov 1 01:03:25.672478 sshd[3919]: Accepted publickey for core from 10.200.16.10 port 57118 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:25.674117 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:25.679308 systemd-logind[1430]: New session 17 of user core.
Nov 1 01:03:25.679398 systemd[1]: Started session-17.scope.
Nov 1 01:03:26.158641 sshd[3919]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:26.161924 systemd[1]: sshd@14-10.200.4.7:22-10.200.16.10:57118.service: Deactivated successfully.
Nov 1 01:03:26.163025 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 01:03:26.163742 systemd-logind[1430]: Session 17 logged out. Waiting for processes to exit.
Nov 1 01:03:26.164680 systemd-logind[1430]: Removed session 17.
Nov 1 01:03:26.258056 systemd[1]: Started sshd@15-10.200.4.7:22-10.200.16.10:57126.service.
Nov 1 01:03:26.846227 sshd[3931]: Accepted publickey for core from 10.200.16.10 port 57126 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:26.847960 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:26.853832 systemd-logind[1430]: New session 18 of user core.
Nov 1 01:03:26.854359 systemd[1]: Started session-18.scope.
Nov 1 01:03:27.435315 sshd[3931]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:27.438829 systemd[1]: sshd@15-10.200.4.7:22-10.200.16.10:57126.service: Deactivated successfully.
Nov 1 01:03:27.439988 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 01:03:27.440865 systemd-logind[1430]: Session 18 logged out. Waiting for processes to exit.
Nov 1 01:03:27.441785 systemd-logind[1430]: Removed session 18.
Nov 1 01:03:27.535546 systemd[1]: Started sshd@16-10.200.4.7:22-10.200.16.10:57128.service.
Nov 1 01:03:28.124499 sshd[3940]: Accepted publickey for core from 10.200.16.10 port 57128 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:28.126089 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:28.131501 systemd-logind[1430]: New session 19 of user core.
Nov 1 01:03:28.132068 systemd[1]: Started session-19.scope.
Nov 1 01:03:29.112516 sshd[3940]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:29.115519 systemd[1]: sshd@16-10.200.4.7:22-10.200.16.10:57128.service: Deactivated successfully.
Nov 1 01:03:29.116418 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 01:03:29.117143 systemd-logind[1430]: Session 19 logged out. Waiting for processes to exit.
Nov 1 01:03:29.118015 systemd-logind[1430]: Removed session 19.
Nov 1 01:03:29.212276 systemd[1]: Started sshd@17-10.200.4.7:22-10.200.16.10:57136.service.
Nov 1 01:03:29.811343 sshd[3955]: Accepted publickey for core from 10.200.16.10 port 57136 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:29.812786 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:29.817850 systemd-logind[1430]: New session 20 of user core.
Nov 1 01:03:29.818363 systemd[1]: Started session-20.scope.
Nov 1 01:03:30.405740 sshd[3955]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:30.409359 systemd[1]: sshd@17-10.200.4.7:22-10.200.16.10:57136.service: Deactivated successfully.
Nov 1 01:03:30.410571 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 01:03:30.411684 systemd-logind[1430]: Session 20 logged out. Waiting for processes to exit.
Nov 1 01:03:30.414665 systemd-logind[1430]: Removed session 20.
Nov 1 01:03:30.506039 systemd[1]: Started sshd@18-10.200.4.7:22-10.200.16.10:47380.service.
Nov 1 01:03:31.097600 sshd[3967]: Accepted publickey for core from 10.200.16.10 port 47380 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:31.098188 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:31.103265 systemd-logind[1430]: New session 21 of user core.
Nov 1 01:03:31.104754 systemd[1]: Started session-21.scope.
Nov 1 01:03:31.586775 sshd[3967]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:31.589748 systemd[1]: sshd@18-10.200.4.7:22-10.200.16.10:47380.service: Deactivated successfully.
Nov 1 01:03:31.590756 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 01:03:31.591517 systemd-logind[1430]: Session 21 logged out. Waiting for processes to exit.
Nov 1 01:03:31.592399 systemd-logind[1430]: Removed session 21.
Nov 1 01:03:36.688305 systemd[1]: Started sshd@19-10.200.4.7:22-10.200.16.10:47384.service.
Nov 1 01:03:37.278970 sshd[3981]: Accepted publickey for core from 10.200.16.10 port 47384 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:37.280697 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:37.285784 systemd[1]: Started session-22.scope.
Nov 1 01:03:37.287439 systemd-logind[1430]: New session 22 of user core.
Nov 1 01:03:37.768392 sshd[3981]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:37.771447 systemd[1]: sshd@19-10.200.4.7:22-10.200.16.10:47384.service: Deactivated successfully.
Nov 1 01:03:37.772454 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 01:03:37.773293 systemd-logind[1430]: Session 22 logged out. Waiting for processes to exit.
Nov 1 01:03:37.774209 systemd-logind[1430]: Removed session 22.
Nov 1 01:03:42.870270 systemd[1]: Started sshd@20-10.200.4.7:22-10.200.16.10:40646.service.
Nov 1 01:03:43.463072 sshd[3996]: Accepted publickey for core from 10.200.16.10 port 40646 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:43.464791 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:43.469981 systemd[1]: Started session-23.scope.
Nov 1 01:03:43.470660 systemd-logind[1430]: New session 23 of user core.
Nov 1 01:03:43.943310 sshd[3996]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:43.946661 systemd[1]: sshd@20-10.200.4.7:22-10.200.16.10:40646.service: Deactivated successfully.
Nov 1 01:03:43.947659 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 01:03:43.948385 systemd-logind[1430]: Session 23 logged out. Waiting for processes to exit.
Nov 1 01:03:43.949225 systemd-logind[1430]: Removed session 23.
Nov 1 01:03:49.044867 systemd[1]: Started sshd@21-10.200.4.7:22-10.200.16.10:40648.service.
Nov 1 01:03:49.641139 sshd[4009]: Accepted publickey for core from 10.200.16.10 port 40648 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:49.642664 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:49.649144 systemd[1]: Started session-24.scope.
Nov 1 01:03:49.650301 systemd-logind[1430]: New session 24 of user core.
Nov 1 01:03:50.135548 sshd[4009]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:50.138591 systemd[1]: sshd@21-10.200.4.7:22-10.200.16.10:40648.service: Deactivated successfully.
Nov 1 01:03:50.139538 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 01:03:50.140297 systemd-logind[1430]: Session 24 logged out. Waiting for processes to exit.
Nov 1 01:03:50.141105 systemd-logind[1430]: Removed session 24.
Nov 1 01:03:55.241713 systemd[1]: Started sshd@22-10.200.4.7:22-10.200.16.10:38502.service.
Nov 1 01:03:55.833132 sshd[4024]: Accepted publickey for core from 10.200.16.10 port 38502 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:03:55.835039 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:03:55.840930 systemd[1]: Started session-25.scope.
Nov 1 01:03:55.841606 systemd-logind[1430]: New session 25 of user core.
Nov 1 01:03:56.320536 sshd[4024]: pam_unix(sshd:session): session closed for user core
Nov 1 01:03:56.324847 systemd[1]: sshd@22-10.200.4.7:22-10.200.16.10:38502.service: Deactivated successfully.
Nov 1 01:03:56.325911 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 01:03:56.326858 systemd-logind[1430]: Session 25 logged out. Waiting for processes to exit.
Nov 1 01:03:56.327865 systemd-logind[1430]: Removed session 25.
Nov 1 01:04:01.424544 systemd[1]: Started sshd@23-10.200.4.7:22-10.200.16.10:37616.service.
Nov 1 01:04:02.024643 sshd[4038]: Accepted publickey for core from 10.200.16.10 port 37616 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:04:02.026071 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:04:02.032089 systemd[1]: Started session-26.scope.
Nov 1 01:04:02.033056 systemd-logind[1430]: New session 26 of user core.
Nov 1 01:04:02.520699 sshd[4038]: pam_unix(sshd:session): session closed for user core
Nov 1 01:04:02.523899 systemd[1]: sshd@23-10.200.4.7:22-10.200.16.10:37616.service: Deactivated successfully.
Nov 1 01:04:02.525011 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 01:04:02.525934 systemd-logind[1430]: Session 26 logged out. Waiting for processes to exit.
Nov 1 01:04:02.526936 systemd-logind[1430]: Removed session 26.
Nov 1 01:04:07.619517 systemd[1]: Started sshd@24-10.200.4.7:22-10.200.16.10:37626.service.
Nov 1 01:04:08.205998 sshd[4050]: Accepted publickey for core from 10.200.16.10 port 37626 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:04:08.207710 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:04:08.212868 systemd[1]: Started session-27.scope.
Nov 1 01:04:08.213561 systemd-logind[1430]: New session 27 of user core.
Nov 1 01:04:08.685114 sshd[4050]: pam_unix(sshd:session): session closed for user core
Nov 1 01:04:08.688788 systemd[1]: sshd@24-10.200.4.7:22-10.200.16.10:37626.service: Deactivated successfully.
Nov 1 01:04:08.689710 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 01:04:08.690490 systemd-logind[1430]: Session 27 logged out. Waiting for processes to exit.
Nov 1 01:04:08.691275 systemd-logind[1430]: Removed session 27.
Nov 1 01:04:08.786561 systemd[1]: Started sshd@25-10.200.4.7:22-10.200.16.10:37636.service.
Nov 1 01:04:09.379011 sshd[4065]: Accepted publickey for core from 10.200.16.10 port 37636 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk
Nov 1 01:04:09.380687 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:04:09.385219 systemd-logind[1430]: New session 28 of user core.
Nov 1 01:04:09.385910 systemd[1]: Started session-28.scope.
Nov 1 01:04:11.034951 env[1442]: time="2025-11-01T01:04:11.030076888Z" level=info msg="StopContainer for \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\" with timeout 30 (s)"
Nov 1 01:04:11.034951 env[1442]: time="2025-11-01T01:04:11.030507894Z" level=info msg="Stop container \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\" with signal terminated"
Nov 1 01:04:11.049310 systemd[1]: run-containerd-runc-k8s.io-067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2-runc.vkKukv.mount: Deactivated successfully.
Nov 1 01:04:11.092420 env[1442]: time="2025-11-01T01:04:11.092340530Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 01:04:11.097543 systemd[1]: cri-containerd-3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866.scope: Deactivated successfully.
Nov 1 01:04:11.104684 env[1442]: time="2025-11-01T01:04:11.104630376Z" level=info msg="StopContainer for \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\" with timeout 2 (s)"
Nov 1 01:04:11.104997 env[1442]: time="2025-11-01T01:04:11.104948180Z" level=info msg="Stop container \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\" with signal terminated"
Nov 1 01:04:11.114755 systemd-networkd[1587]: lxc_health: Link DOWN
Nov 1 01:04:11.114764 systemd-networkd[1587]: lxc_health: Lost carrier
Nov 1 01:04:11.130871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866-rootfs.mount: Deactivated successfully.
Nov 1 01:04:11.138605 systemd[1]: cri-containerd-067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2.scope: Deactivated successfully.
Nov 1 01:04:11.138911 systemd[1]: cri-containerd-067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2.scope: Consumed 7.242s CPU time.
Nov 1 01:04:11.161509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2-rootfs.mount: Deactivated successfully.
Nov 1 01:04:11.173619 env[1442]: time="2025-11-01T01:04:11.173576297Z" level=info msg="shim disconnected" id=3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866
Nov 1 01:04:11.173864 env[1442]: time="2025-11-01T01:04:11.173845300Z" level=warning msg="cleaning up after shim disconnected" id=3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866 namespace=k8s.io
Nov 1 01:04:11.173951 env[1442]: time="2025-11-01T01:04:11.173939601Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:11.175060 env[1442]: time="2025-11-01T01:04:11.175023714Z" level=info msg="shim disconnected" id=067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2
Nov 1 01:04:11.175223 env[1442]: time="2025-11-01T01:04:11.175207717Z" level=warning msg="cleaning up after shim disconnected" id=067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2 namespace=k8s.io
Nov 1 01:04:11.175325 env[1442]: time="2025-11-01T01:04:11.175312818Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:11.187143 env[1442]: time="2025-11-01T01:04:11.187103958Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4132 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:11.187958 env[1442]: time="2025-11-01T01:04:11.187558464Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4131 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:11.192653 env[1442]: time="2025-11-01T01:04:11.192625324Z" level=info msg="StopContainer for \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\" returns successfully"
Nov 1 01:04:11.193459 env[1442]: time="2025-11-01T01:04:11.193432034Z" level=info msg="StopPodSandbox for \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\""
Nov 1 01:04:11.193558 env[1442]: time="2025-11-01T01:04:11.193497634Z" level=info msg="Container to stop \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 01:04:11.193558 env[1442]: time="2025-11-01T01:04:11.193517635Z" level=info msg="Container to stop \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 01:04:11.193558 env[1442]: time="2025-11-01T01:04:11.193534235Z" level=info msg="Container to stop \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 01:04:11.193558 env[1442]: time="2025-11-01T01:04:11.193549235Z" level=info msg="Container to stop \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 01:04:11.193817 env[1442]: time="2025-11-01T01:04:11.193564235Z" level=info msg="Container to stop \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 01:04:11.195453 env[1442]: time="2025-11-01T01:04:11.195419057Z" level=info msg="StopContainer for \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\" returns successfully"
Nov 1 01:04:11.195996 env[1442]: time="2025-11-01T01:04:11.195970364Z" level=info msg="StopPodSandbox for \"55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8\""
Nov 1 01:04:11.196171 env[1442]: time="2025-11-01T01:04:11.196136366Z" level=info msg="Container to stop \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 01:04:11.201461 systemd[1]: cri-containerd-d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a.scope: Deactivated successfully.
Nov 1 01:04:11.207991 systemd[1]: cri-containerd-55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8.scope: Deactivated successfully.
Nov 1 01:04:11.240755 env[1442]: time="2025-11-01T01:04:11.240701896Z" level=info msg="shim disconnected" id=d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a
Nov 1 01:04:11.240755 env[1442]: time="2025-11-01T01:04:11.240754497Z" level=warning msg="cleaning up after shim disconnected" id=d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a namespace=k8s.io
Nov 1 01:04:11.241092 env[1442]: time="2025-11-01T01:04:11.240767497Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:11.241668 env[1442]: time="2025-11-01T01:04:11.241607807Z" level=info msg="shim disconnected" id=55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8
Nov 1 01:04:11.241668 env[1442]: time="2025-11-01T01:04:11.241651408Z" level=warning msg="cleaning up after shim disconnected" id=55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8 namespace=k8s.io
Nov 1 01:04:11.241668 env[1442]: time="2025-11-01T01:04:11.241663308Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:11.251564 env[1442]: time="2025-11-01T01:04:11.251512425Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4198 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:11.252160 env[1442]: time="2025-11-01T01:04:11.252120032Z" level=info msg="TearDown network for sandbox \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" successfully"
Nov 1 01:04:11.252323 env[1442]: time="2025-11-01T01:04:11.252299635Z" level=info msg="StopPodSandbox for \"d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a\" returns successfully"
Nov 1 01:04:11.256721 env[1442]: time="2025-11-01T01:04:11.256563785Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4199 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:11.256917 env[1442]: time="2025-11-01T01:04:11.256885389Z" level=info msg="TearDown network for sandbox \"55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8\" successfully"
Nov 1 01:04:11.257043 env[1442]: time="2025-11-01T01:04:11.256919090Z" level=info msg="StopPodSandbox for \"55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8\" returns successfully"
Nov 1 01:04:11.355972 kubelet[2455]: I1101 01:04:11.354457 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmccj\" (UniqueName: \"kubernetes.io/projected/3d9f87cf-6653-4339-85c8-53ca43ebee6b-kube-api-access-fmccj\") pod \"3d9f87cf-6653-4339-85c8-53ca43ebee6b\" (UID: \"3d9f87cf-6653-4339-85c8-53ca43ebee6b\") "
Nov 1 01:04:11.356588 kubelet[2455]: I1101 01:04:11.356553 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-hostproc\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.356745 kubelet[2455]: I1101 01:04:11.356728 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cni-path\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.356872 kubelet[2455]: I1101 01:04:11.356859 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-run\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.357020 kubelet[2455]: I1101 01:04:11.357008 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-etc-cni-netd\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.357152 kubelet[2455]: I1101 01:04:11.357141 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-hubble-tls\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.357576 kubelet[2455]: I1101 01:04:11.357557 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/513493cc-dc29-4d19-b933-8f7df774d51b-clustermesh-secrets\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.357719 kubelet[2455]: I1101 01:04:11.357704 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-xtables-lock\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.357840 kubelet[2455]: I1101 01:04:11.357819 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd2m7\" (UniqueName: \"kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-kube-api-access-rd2m7\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.357952 kubelet[2455]: I1101 01:04:11.357939 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-net\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.358057 kubelet[2455]: I1101 01:04:11.358042 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-lib-modules\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.358160 kubelet[2455]: I1101 01:04:11.358144 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-bpf-maps\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.358279 kubelet[2455]: I1101 01:04:11.358265 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-cgroup\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.358389 kubelet[2455]: I1101 01:04:11.358377 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9f87cf-6653-4339-85c8-53ca43ebee6b-cilium-config-path\") pod \"3d9f87cf-6653-4339-85c8-53ca43ebee6b\" (UID: \"3d9f87cf-6653-4339-85c8-53ca43ebee6b\") "
Nov 1 01:04:11.360905 kubelet[2455]: I1101 01:04:11.356784 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-hostproc" (OuterVolumeSpecName: "hostproc") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.361075 kubelet[2455]: I1101 01:04:11.356810 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cni-path" (OuterVolumeSpecName: "cni-path") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.361179 kubelet[2455]: I1101 01:04:11.356972 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.361285 kubelet[2455]: I1101 01:04:11.357103 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.361380 kubelet[2455]: I1101 01:04:11.360876 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d9f87cf-6653-4339-85c8-53ca43ebee6b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d9f87cf-6653-4339-85c8-53ca43ebee6b" (UID: "3d9f87cf-6653-4339-85c8-53ca43ebee6b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 01:04:11.361482 kubelet[2455]: I1101 01:04:11.361266 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9f87cf-6653-4339-85c8-53ca43ebee6b-kube-api-access-fmccj" (OuterVolumeSpecName: "kube-api-access-fmccj") pod "3d9f87cf-6653-4339-85c8-53ca43ebee6b" (UID: "3d9f87cf-6653-4339-85c8-53ca43ebee6b"). InnerVolumeSpecName "kube-api-access-fmccj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 01:04:11.361577 kubelet[2455]: I1101 01:04:11.361296 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.361685 kubelet[2455]: I1101 01:04:11.361669 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.361799 kubelet[2455]: I1101 01:04:11.361784 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.361903 kubelet[2455]: I1101 01:04:11.361889 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.362075 kubelet[2455]: I1101 01:04:11.362059 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.362947 kubelet[2455]: I1101 01:04:11.362919 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 01:04:11.365855 kubelet[2455]: I1101 01:04:11.365826 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-kube-api-access-rd2m7" (OuterVolumeSpecName: "kube-api-access-rd2m7") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "kube-api-access-rd2m7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 01:04:11.365979 kubelet[2455]: I1101 01:04:11.365954 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513493cc-dc29-4d19-b933-8f7df774d51b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 01:04:11.459421 kubelet[2455]: I1101 01:04:11.459363 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-kernel\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.459654 kubelet[2455]: I1101 01:04:11.459439 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-config-path\") pod \"513493cc-dc29-4d19-b933-8f7df774d51b\" (UID: \"513493cc-dc29-4d19-b933-8f7df774d51b\") "
Nov 1 01:04:11.459654 kubelet[2455]: I1101 01:04:11.459497 2455 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-bpf-maps\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.459654 kubelet[2455]: I1101 01:04:11.459512 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-cgroup\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.459654 kubelet[2455]: I1101 01:04:11.459527 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d9f87cf-6653-4339-85c8-53ca43ebee6b-cilium-config-path\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.459654 kubelet[2455]: I1101 01:04:11.459541 2455 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fmccj\" (UniqueName: \"kubernetes.io/projected/3d9f87cf-6653-4339-85c8-53ca43ebee6b-kube-api-access-fmccj\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.459654 kubelet[2455]: I1101 01:04:11.459556 2455 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-hostproc\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.459654 kubelet[2455]: I1101 01:04:11.459568 2455 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cni-path\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459580 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-run\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459593 2455 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-etc-cni-netd\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459607 2455 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-hubble-tls\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459620 2455 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/513493cc-dc29-4d19-b933-8f7df774d51b-clustermesh-secrets\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459633 2455 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-xtables-lock\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459646 2455 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rd2m7\" (UniqueName: \"kubernetes.io/projected/513493cc-dc29-4d19-b933-8f7df774d51b-kube-api-access-rd2m7\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459661 2455 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-net\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460320 kubelet[2455]: I1101 01:04:11.459679 2455 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-lib-modules\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.460892 kubelet[2455]: I1101 01:04:11.460858 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 01:04:11.462525 kubelet[2455]: I1101 01:04:11.462491 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "513493cc-dc29-4d19-b933-8f7df774d51b" (UID: "513493cc-dc29-4d19-b933-8f7df774d51b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 01:04:11.508170 systemd[1]: Removed slice kubepods-burstable-pod513493cc_dc29_4d19_b933_8f7df774d51b.slice.
Nov 1 01:04:11.508323 systemd[1]: kubepods-burstable-pod513493cc_dc29_4d19_b933_8f7df774d51b.slice: Consumed 7.349s CPU time.
Nov 1 01:04:11.511704 systemd[1]: Removed slice kubepods-besteffort-pod3d9f87cf_6653_4339_85c8_53ca43ebee6b.slice.
Nov 1 01:04:11.560367 kubelet[2455]: I1101 01:04:11.560319 2455 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/513493cc-dc29-4d19-b933-8f7df774d51b-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:11.560367 kubelet[2455]: I1101 01:04:11.560364 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/513493cc-dc29-4d19-b933-8f7df774d51b-cilium-config-path\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\""
Nov 1 01:04:12.023347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a-rootfs.mount: Deactivated successfully.
Nov 1 01:04:12.023476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d85f0501d827607ec2e0594017e56f93de9ecc9b94487a990a6112f7917d2b0a-shm.mount: Deactivated successfully.
Nov 1 01:04:12.023557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8-rootfs.mount: Deactivated successfully. Nov 1 01:04:12.023634 systemd[1]: var-lib-kubelet-pods-513493cc\x2ddc29\x2d4d19\x2db933\x2d8f7df774d51b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drd2m7.mount: Deactivated successfully. Nov 1 01:04:12.023716 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55b6ace6ba438f3c8e87fe1c332666f9e4f0a3736acfbc91900bf90fdbbf56e8-shm.mount: Deactivated successfully. Nov 1 01:04:12.023803 systemd[1]: var-lib-kubelet-pods-3d9f87cf\x2d6653\x2d4339\x2d85c8\x2d53ca43ebee6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmccj.mount: Deactivated successfully. Nov 1 01:04:12.023886 systemd[1]: var-lib-kubelet-pods-513493cc\x2ddc29\x2d4d19\x2db933\x2d8f7df774d51b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 01:04:12.023969 systemd[1]: var-lib-kubelet-pods-513493cc\x2ddc29\x2d4d19\x2db933\x2d8f7df774d51b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 1 01:04:12.069021 kubelet[2455]: I1101 01:04:12.068975 2455 scope.go:117] "RemoveContainer" containerID="3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866" Nov 1 01:04:12.072901 env[1442]: time="2025-11-01T01:04:12.072451196Z" level=info msg="RemoveContainer for \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\"" Nov 1 01:04:12.085200 env[1442]: time="2025-11-01T01:04:12.085154147Z" level=info msg="RemoveContainer for \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\" returns successfully" Nov 1 01:04:12.085496 kubelet[2455]: I1101 01:04:12.085471 2455 scope.go:117] "RemoveContainer" containerID="3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866" Nov 1 01:04:12.085779 env[1442]: time="2025-11-01T01:04:12.085709153Z" level=error msg="ContainerStatus for \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\": not found" Nov 1 01:04:12.085981 kubelet[2455]: E1101 01:04:12.085952 2455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\": not found" containerID="3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866" Nov 1 01:04:12.086118 kubelet[2455]: I1101 01:04:12.086078 2455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866"} err="failed to get container status \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a6da54e968d39104117ce11f336cc761eba2a4916795abf1e8427940d8dc866\": not found" Nov 1 01:04:12.086118 kubelet[2455]: I1101 01:04:12.086115 
2455 scope.go:117] "RemoveContainer" containerID="067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2" Nov 1 01:04:12.087302 env[1442]: time="2025-11-01T01:04:12.087270072Z" level=info msg="RemoveContainer for \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\"" Nov 1 01:04:12.096195 env[1442]: time="2025-11-01T01:04:12.096146877Z" level=info msg="RemoveContainer for \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\" returns successfully" Nov 1 01:04:12.096500 kubelet[2455]: I1101 01:04:12.096478 2455 scope.go:117] "RemoveContainer" containerID="1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17" Nov 1 01:04:12.098724 env[1442]: time="2025-11-01T01:04:12.098680607Z" level=info msg="RemoveContainer for \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\"" Nov 1 01:04:12.107513 env[1442]: time="2025-11-01T01:04:12.107464911Z" level=info msg="RemoveContainer for \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\" returns successfully" Nov 1 01:04:12.108349 kubelet[2455]: I1101 01:04:12.108319 2455 scope.go:117] "RemoveContainer" containerID="324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b" Nov 1 01:04:12.111177 env[1442]: time="2025-11-01T01:04:12.111071354Z" level=info msg="RemoveContainer for \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\"" Nov 1 01:04:12.118383 env[1442]: time="2025-11-01T01:04:12.118343040Z" level=info msg="RemoveContainer for \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\" returns successfully" Nov 1 01:04:12.118599 kubelet[2455]: I1101 01:04:12.118575 2455 scope.go:117] "RemoveContainer" containerID="487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204" Nov 1 01:04:12.119716 env[1442]: time="2025-11-01T01:04:12.119687456Z" level=info msg="RemoveContainer for \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\"" Nov 1 01:04:12.127194 env[1442]: 
time="2025-11-01T01:04:12.127144844Z" level=info msg="RemoveContainer for \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\" returns successfully" Nov 1 01:04:12.127423 kubelet[2455]: I1101 01:04:12.127396 2455 scope.go:117] "RemoveContainer" containerID="64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0" Nov 1 01:04:12.128602 env[1442]: time="2025-11-01T01:04:12.128566361Z" level=info msg="RemoveContainer for \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\"" Nov 1 01:04:12.136962 env[1442]: time="2025-11-01T01:04:12.136917160Z" level=info msg="RemoveContainer for \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\" returns successfully" Nov 1 01:04:12.137199 kubelet[2455]: I1101 01:04:12.137172 2455 scope.go:117] "RemoveContainer" containerID="067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2" Nov 1 01:04:12.137548 env[1442]: time="2025-11-01T01:04:12.137445566Z" level=error msg="ContainerStatus for \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\": not found" Nov 1 01:04:12.137694 kubelet[2455]: E1101 01:04:12.137668 2455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\": not found" containerID="067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2" Nov 1 01:04:12.137774 kubelet[2455]: I1101 01:04:12.137707 2455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2"} err="failed to get container status \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"067b2b9015d5fd594121f3277c90e7416e62ea3eded39da1300487bcaae299b2\": not found" Nov 1 01:04:12.137774 kubelet[2455]: I1101 01:04:12.137736 2455 scope.go:117] "RemoveContainer" containerID="1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17" Nov 1 01:04:12.138055 env[1442]: time="2025-11-01T01:04:12.137959972Z" level=error msg="ContainerStatus for \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\": not found" Nov 1 01:04:12.138132 kubelet[2455]: E1101 01:04:12.138113 2455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\": not found" containerID="1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17" Nov 1 01:04:12.138190 kubelet[2455]: I1101 01:04:12.138142 2455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17"} err="failed to get container status \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d50455a0b944398fe190c6ee7006242bc6e97cc490a19db1a621754d499db17\": not found" Nov 1 01:04:12.138190 kubelet[2455]: I1101 01:04:12.138163 2455 scope.go:117] "RemoveContainer" containerID="324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b" Nov 1 01:04:12.138487 env[1442]: time="2025-11-01T01:04:12.138420778Z" level=error msg="ContainerStatus for \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\": not found" Nov 1 01:04:12.138620 kubelet[2455]: E1101 01:04:12.138572 2455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\": not found" containerID="324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b" Nov 1 01:04:12.138620 kubelet[2455]: I1101 01:04:12.138599 2455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b"} err="failed to get container status \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\": rpc error: code = NotFound desc = an error occurred when try to find container \"324e84925412cd5b4a679d9a542adbbef10aa931b957c07344017e767af8973b\": not found" Nov 1 01:04:12.138620 kubelet[2455]: I1101 01:04:12.138617 2455 scope.go:117] "RemoveContainer" containerID="487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204" Nov 1 01:04:12.138914 env[1442]: time="2025-11-01T01:04:12.138871583Z" level=error msg="ContainerStatus for \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\": not found" Nov 1 01:04:12.139080 kubelet[2455]: E1101 01:04:12.139053 2455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\": not found" containerID="487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204" Nov 1 01:04:12.139153 kubelet[2455]: I1101 01:04:12.139078 2455 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204"} err="failed to get container status \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\": rpc error: code = NotFound desc = an error occurred when try to find container \"487f4bfe9f4aeb79da6d7d9147ccd02196f97d7882428481bb958554126db204\": not found" Nov 1 01:04:12.139153 kubelet[2455]: I1101 01:04:12.139096 2455 scope.go:117] "RemoveContainer" containerID="64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0" Nov 1 01:04:12.139506 env[1442]: time="2025-11-01T01:04:12.139405090Z" level=error msg="ContainerStatus for \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\": not found" Nov 1 01:04:12.139612 kubelet[2455]: E1101 01:04:12.139565 2455 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\": not found" containerID="64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0" Nov 1 01:04:12.139612 kubelet[2455]: I1101 01:04:12.139589 2455 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0"} err="failed to get container status \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\": rpc error: code = NotFound desc = an error occurred when try to find container \"64dffd83ae11fdf80993bb387a123e2c2ac5c017faedec08bdf037bf8ffb5fa0\": not found" Nov 1 01:04:13.051970 sshd[4065]: pam_unix(sshd:session): session closed for user core Nov 1 01:04:13.055162 systemd[1]: sshd@25-10.200.4.7:22-10.200.16.10:37636.service: Deactivated successfully. 
Nov 1 01:04:13.056108 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 01:04:13.056819 systemd-logind[1430]: Session 28 logged out. Waiting for processes to exit. Nov 1 01:04:13.057712 systemd-logind[1430]: Removed session 28. Nov 1 01:04:13.151884 systemd[1]: Started sshd@26-10.200.4.7:22-10.200.16.10:53980.service. Nov 1 01:04:13.504134 kubelet[2455]: I1101 01:04:13.504084 2455 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9f87cf-6653-4339-85c8-53ca43ebee6b" path="/var/lib/kubelet/pods/3d9f87cf-6653-4339-85c8-53ca43ebee6b/volumes" Nov 1 01:04:13.504841 kubelet[2455]: I1101 01:04:13.504809 2455 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513493cc-dc29-4d19-b933-8f7df774d51b" path="/var/lib/kubelet/pods/513493cc-dc29-4d19-b933-8f7df774d51b/volumes" Nov 1 01:04:13.743345 sshd[4233]: Accepted publickey for core from 10.200.16.10 port 53980 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:04:13.745126 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:04:13.751003 systemd-logind[1430]: New session 29 of user core. Nov 1 01:04:13.751637 systemd[1]: Started session-29.scope. Nov 1 01:04:14.551418 systemd[1]: Created slice kubepods-burstable-poda415738a_def1_4c21_8de7_b70fc582fcec.slice. 
Nov 1 01:04:14.583225 kubelet[2455]: I1101 01:04:14.583181 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-clustermesh-secrets\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.583850 kubelet[2455]: I1101 01:04:14.583798 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-ipsec-secrets\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584025 kubelet[2455]: I1101 01:04:14.584006 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-lib-modules\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584127 kubelet[2455]: I1101 01:04:14.584111 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-net\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584256 kubelet[2455]: I1101 01:04:14.584223 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-hostproc\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584407 kubelet[2455]: I1101 01:04:14.584389 2455 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-hubble-tls\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584506 kubelet[2455]: I1101 01:04:14.584492 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-run\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584604 kubelet[2455]: I1101 01:04:14.584588 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-bpf-maps\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584690 kubelet[2455]: I1101 01:04:14.584677 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-cgroup\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584774 kubelet[2455]: I1101 01:04:14.584761 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-etc-cni-netd\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584857 kubelet[2455]: I1101 01:04:14.584843 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-xtables-lock\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.584953 kubelet[2455]: I1101 01:04:14.584939 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-config-path\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.585049 kubelet[2455]: I1101 01:04:14.585033 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dbd\" (UniqueName: \"kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-kube-api-access-96dbd\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.585156 kubelet[2455]: I1101 01:04:14.585140 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cni-path\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.585265 kubelet[2455]: I1101 01:04:14.585250 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-kernel\") pod \"cilium-ft26f\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " pod="kube-system/cilium-ft26f" Nov 1 01:04:14.606298 sshd[4233]: pam_unix(sshd:session): session closed for user core Nov 1 01:04:14.609842 systemd-logind[1430]: Session 29 logged out. Waiting for processes to exit. Nov 1 01:04:14.610324 systemd[1]: sshd@26-10.200.4.7:22-10.200.16.10:53980.service: Deactivated successfully. 
Nov 1 01:04:14.611308 systemd[1]: session-29.scope: Deactivated successfully. Nov 1 01:04:14.612775 systemd-logind[1430]: Removed session 29. Nov 1 01:04:14.706410 systemd[1]: Started sshd@27-10.200.4.7:22-10.200.16.10:53990.service. Nov 1 01:04:14.862291 env[1442]: time="2025-11-01T01:04:14.861427073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ft26f,Uid:a415738a-def1-4c21-8de7-b70fc582fcec,Namespace:kube-system,Attempt:0,}" Nov 1 01:04:14.899419 env[1442]: time="2025-11-01T01:04:14.899349917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:04:14.899419 env[1442]: time="2025-11-01T01:04:14.899386118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:04:14.899645 env[1442]: time="2025-11-01T01:04:14.899400318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:04:14.899888 env[1442]: time="2025-11-01T01:04:14.899845123Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44 pid=4259 runtime=io.containerd.runc.v2 Nov 1 01:04:14.913475 systemd[1]: Started cri-containerd-6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44.scope. 
Nov 1 01:04:14.942270 env[1442]: time="2025-11-01T01:04:14.942211520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ft26f,Uid:a415738a-def1-4c21-8de7-b70fc582fcec,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\"" Nov 1 01:04:14.951347 env[1442]: time="2025-11-01T01:04:14.950975723Z" level=info msg="CreateContainer within sandbox \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 01:04:14.979309 env[1442]: time="2025-11-01T01:04:14.979219654Z" level=info msg="CreateContainer within sandbox \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\"" Nov 1 01:04:14.981508 env[1442]: time="2025-11-01T01:04:14.979865061Z" level=info msg="StartContainer for \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\"" Nov 1 01:04:14.998043 systemd[1]: Started cri-containerd-d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9.scope. Nov 1 01:04:15.014945 systemd[1]: cri-containerd-d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9.scope: Deactivated successfully. 
Nov 1 01:04:15.062097 env[1442]: time="2025-11-01T01:04:15.062033022Z" level=info msg="shim disconnected" id=d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9 Nov 1 01:04:15.062097 env[1442]: time="2025-11-01T01:04:15.062095822Z" level=warning msg="cleaning up after shim disconnected" id=d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9 namespace=k8s.io Nov 1 01:04:15.062097 env[1442]: time="2025-11-01T01:04:15.062106922Z" level=info msg="cleaning up dead shim" Nov 1 01:04:15.071226 env[1442]: time="2025-11-01T01:04:15.071166428Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4319 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T01:04:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 01:04:15.071611 env[1442]: time="2025-11-01T01:04:15.071492932Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Nov 1 01:04:15.075344 env[1442]: time="2025-11-01T01:04:15.075290376Z" level=error msg="Failed to pipe stdout of container \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\"" error="reading from a closed fifo" Nov 1 01:04:15.076372 env[1442]: time="2025-11-01T01:04:15.076317588Z" level=error msg="Failed to pipe stderr of container \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\"" error="reading from a closed fifo" Nov 1 01:04:15.084781 env[1442]: time="2025-11-01T01:04:15.084370182Z" level=error msg="StartContainer for \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Nov 1 01:04:15.084968 kubelet[2455]: E1101 01:04:15.084891 2455 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9" Nov 1 01:04:15.085185 kubelet[2455]: E1101 01:04:15.084990 2455 kuberuntime_manager.go:1449] "Unhandled Error" err="init container mount-cgroup start failed in pod cilium-ft26f_kube-system(a415738a-def1-4c21-8de7-b70fc582fcec): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" logger="UnhandledError" Nov 1 01:04:15.085185 kubelet[2455]: E1101 01:04:15.085039 2455 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ft26f" podUID="a415738a-def1-4c21-8de7-b70fc582fcec" Nov 1 01:04:15.317542 sshd[4247]: Accepted publickey for core from 10.200.16.10 port 53990 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:04:15.319442 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:04:15.324306 systemd-logind[1430]: New session 30 of user core. Nov 1 01:04:15.324402 systemd[1]: Started session-30.scope. 
Nov 1 01:04:15.650885 kubelet[2455]: E1101 01:04:15.649754 2455 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 01:04:15.827899 sshd[4247]: pam_unix(sshd:session): session closed for user core Nov 1 01:04:15.831459 systemd[1]: sshd@27-10.200.4.7:22-10.200.16.10:53990.service: Deactivated successfully. Nov 1 01:04:15.832357 systemd[1]: session-30.scope: Deactivated successfully. Nov 1 01:04:15.833060 systemd-logind[1430]: Session 30 logged out. Waiting for processes to exit. Nov 1 01:04:15.833917 systemd-logind[1430]: Removed session 30. Nov 1 01:04:15.928831 systemd[1]: Started sshd@28-10.200.4.7:22-10.200.16.10:53996.service. Nov 1 01:04:16.096392 env[1442]: time="2025-11-01T01:04:16.094527064Z" level=info msg="CreateContainer within sandbox \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Nov 1 01:04:16.124746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650107252.mount: Deactivated successfully. Nov 1 01:04:16.141881 env[1442]: time="2025-11-01T01:04:16.141828113Z" level=info msg="CreateContainer within sandbox \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\"" Nov 1 01:04:16.142617 env[1442]: time="2025-11-01T01:04:16.142580322Z" level=info msg="StartContainer for \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\"" Nov 1 01:04:16.167476 systemd[1]: Started cri-containerd-bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392.scope. Nov 1 01:04:16.184454 systemd[1]: cri-containerd-bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392.scope: Deactivated successfully. 
Nov 1 01:04:16.210226 env[1442]: time="2025-11-01T01:04:16.210162906Z" level=info msg="shim disconnected" id=bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392 Nov 1 01:04:16.210226 env[1442]: time="2025-11-01T01:04:16.210223107Z" level=warning msg="cleaning up after shim disconnected" id=bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392 namespace=k8s.io Nov 1 01:04:16.210574 env[1442]: time="2025-11-01T01:04:16.210271807Z" level=info msg="cleaning up dead shim" Nov 1 01:04:16.218782 env[1442]: time="2025-11-01T01:04:16.218731306Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4368 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T01:04:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 01:04:16.219071 env[1442]: time="2025-11-01T01:04:16.219011009Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Nov 1 01:04:16.219335 env[1442]: time="2025-11-01T01:04:16.219293212Z" level=error msg="Failed to pipe stdout of container \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\"" error="reading from a closed fifo" Nov 1 01:04:16.223445 env[1442]: time="2025-11-01T01:04:16.223400460Z" level=error msg="Failed to pipe stderr of container \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\"" error="reading from a closed fifo" Nov 1 01:04:16.228307 env[1442]: time="2025-11-01T01:04:16.228258516Z" level=error msg="StartContainer for \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Nov 1 01:04:16.228594 kubelet[2455]: E1101 01:04:16.228555 2455 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392" Nov 1 01:04:16.229267 kubelet[2455]: E1101 01:04:16.229031 2455 kuberuntime_manager.go:1449] "Unhandled Error" err="init container mount-cgroup start failed in pod cilium-ft26f_kube-system(a415738a-def1-4c21-8de7-b70fc582fcec): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" logger="UnhandledError" Nov 1 01:04:16.229267 kubelet[2455]: E1101 01:04:16.229095 2455 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ft26f" podUID="a415738a-def1-4c21-8de7-b70fc582fcec" Nov 1 01:04:16.522522 sshd[4341]: Accepted publickey for core from 10.200.16.10 port 53996 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:04:16.524265 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:04:16.529307 systemd-logind[1430]: New session 31 of user core. Nov 1 01:04:16.529637 systemd[1]: Started session-31.scope. 
Nov 1 01:04:16.693708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392-rootfs.mount: Deactivated successfully. Nov 1 01:04:17.094906 kubelet[2455]: I1101 01:04:17.094872 2455 scope.go:117] "RemoveContainer" containerID="d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9" Nov 1 01:04:17.095582 env[1442]: time="2025-11-01T01:04:17.095543781Z" level=info msg="StopPodSandbox for \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\"" Nov 1 01:04:17.096078 env[1442]: time="2025-11-01T01:04:17.096027686Z" level=info msg="Container to stop \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:17.096246 env[1442]: time="2025-11-01T01:04:17.096209188Z" level=info msg="Container to stop \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:17.102823 env[1442]: time="2025-11-01T01:04:17.096727794Z" level=info msg="RemoveContainer for \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\"" Nov 1 01:04:17.100882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44-shm.mount: Deactivated successfully. Nov 1 01:04:17.106760 env[1442]: time="2025-11-01T01:04:17.106720210Z" level=info msg="RemoveContainer for \"d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9\" returns successfully" Nov 1 01:04:17.115537 systemd[1]: cri-containerd-6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44.scope: Deactivated successfully. Nov 1 01:04:17.143089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44-rootfs.mount: Deactivated successfully. 
Nov 1 01:04:17.163227 env[1442]: time="2025-11-01T01:04:17.163176662Z" level=info msg="shim disconnected" id=6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44 Nov 1 01:04:17.163493 env[1442]: time="2025-11-01T01:04:17.163226963Z" level=warning msg="cleaning up after shim disconnected" id=6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44 namespace=k8s.io Nov 1 01:04:17.163493 env[1442]: time="2025-11-01T01:04:17.163261363Z" level=info msg="cleaning up dead shim" Nov 1 01:04:17.171527 env[1442]: time="2025-11-01T01:04:17.171482258Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4406 runtime=io.containerd.runc.v2\n" Nov 1 01:04:17.171860 env[1442]: time="2025-11-01T01:04:17.171825162Z" level=info msg="TearDown network for sandbox \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\" successfully" Nov 1 01:04:17.171860 env[1442]: time="2025-11-01T01:04:17.171857462Z" level=info msg="StopPodSandbox for \"6d4aba1b339122bd6b4f5bce1748266d9b6102e44e4bdb4894870bd602f7bf44\" returns successfully" Nov 1 01:04:17.204495 kubelet[2455]: I1101 01:04:17.204444 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-etc-cni-netd\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.204780 kubelet[2455]: I1101 01:04:17.204572 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.204873 kubelet[2455]: I1101 01:04:17.204757 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-clustermesh-secrets\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.204873 kubelet[2455]: I1101 01:04:17.204815 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-net\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.204873 kubelet[2455]: I1101 01:04:17.204856 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-bpf-maps\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205036 kubelet[2455]: I1101 01:04:17.204875 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cni-path\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205036 kubelet[2455]: I1101 01:04:17.204949 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.205036 kubelet[2455]: I1101 01:04:17.204975 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.205036 kubelet[2455]: I1101 01:04:17.205003 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cni-path" (OuterVolumeSpecName: "cni-path") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.205036 kubelet[2455]: I1101 01:04:17.204905 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-ipsec-secrets\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205278 kubelet[2455]: I1101 01:04:17.205040 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-cgroup\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205391 kubelet[2455]: I1101 01:04:17.205366 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dbd\" (UniqueName: \"kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-kube-api-access-96dbd\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 
01:04:17.205448 kubelet[2455]: I1101 01:04:17.205412 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-lib-modules\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205501 kubelet[2455]: I1101 01:04:17.205450 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-run\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205501 kubelet[2455]: I1101 01:04:17.205480 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-xtables-lock\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205588 kubelet[2455]: I1101 01:04:17.205502 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-hostproc\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205679 kubelet[2455]: I1101 01:04:17.205653 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-kernel\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205738 kubelet[2455]: I1101 01:04:17.205693 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-hubble-tls\") pod 
\"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205738 kubelet[2455]: I1101 01:04:17.205732 2455 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-config-path\") pod \"a415738a-def1-4c21-8de7-b70fc582fcec\" (UID: \"a415738a-def1-4c21-8de7-b70fc582fcec\") " Nov 1 01:04:17.205828 kubelet[2455]: I1101 01:04:17.205800 2455 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-net\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.205828 kubelet[2455]: I1101 01:04:17.205816 2455 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-bpf-maps\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.205913 kubelet[2455]: I1101 01:04:17.205828 2455 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cni-path\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.205913 kubelet[2455]: I1101 01:04:17.205839 2455 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-etc-cni-netd\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.211764 systemd[1]: var-lib-kubelet-pods-a415738a\x2ddef1\x2d4c21\x2d8de7\x2db70fc582fcec-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Nov 1 01:04:17.213418 kubelet[2455]: I1101 01:04:17.213370 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:04:17.214076 kubelet[2455]: I1101 01:04:17.213571 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.214592 kubelet[2455]: I1101 01:04:17.214215 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.214722 kubelet[2455]: I1101 01:04:17.214302 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.214804 kubelet[2455]: I1101 01:04:17.214320 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.214876 kubelet[2455]: I1101 01:04:17.214316 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:04:17.214950 kubelet[2455]: I1101 01:04:17.214333 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-hostproc" (OuterVolumeSpecName: "hostproc") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.215025 kubelet[2455]: I1101 01:04:17.214339 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:17.218128 kubelet[2455]: I1101 01:04:17.218099 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-kube-api-access-96dbd" (OuterVolumeSpecName: "kube-api-access-96dbd") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "kube-api-access-96dbd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:04:17.220106 systemd[1]: var-lib-kubelet-pods-a415738a\x2ddef1\x2d4c21\x2d8de7\x2db70fc582fcec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 01:04:17.220261 systemd[1]: var-lib-kubelet-pods-a415738a\x2ddef1\x2d4c21\x2d8de7\x2db70fc582fcec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96dbd.mount: Deactivated successfully. Nov 1 01:04:17.224190 kubelet[2455]: I1101 01:04:17.224161 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:04:17.224305 kubelet[2455]: I1101 01:04:17.224169 2455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a415738a-def1-4c21-8de7-b70fc582fcec" (UID: "a415738a-def1-4c21-8de7-b70fc582fcec"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:04:17.306499 kubelet[2455]: I1101 01:04:17.306446 2455 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-clustermesh-secrets\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306499 kubelet[2455]: I1101 01:04:17.306490 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306499 kubelet[2455]: I1101 01:04:17.306505 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-cgroup\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306519 2455 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96dbd\" (UniqueName: \"kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-kube-api-access-96dbd\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306535 2455 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-lib-modules\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306547 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-run\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306560 2455 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-xtables-lock\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306571 2455 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-hostproc\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306585 2455 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a415738a-def1-4c21-8de7-b70fc582fcec-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306600 2455 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a415738a-def1-4c21-8de7-b70fc582fcec-hubble-tls\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.306790 kubelet[2455]: I1101 01:04:17.306613 2455 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a415738a-def1-4c21-8de7-b70fc582fcec-cilium-config-path\") on node \"ci-3510.3.8-n-e458e05b0a\" DevicePath \"\"" Nov 1 01:04:17.507862 systemd[1]: Removed slice kubepods-burstable-poda415738a_def1_4c21_8de7_b70fc582fcec.slice. Nov 1 01:04:17.693622 systemd[1]: var-lib-kubelet-pods-a415738a\x2ddef1\x2d4c21\x2d8de7\x2db70fc582fcec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 1 01:04:18.098082 kubelet[2455]: I1101 01:04:18.098036 2455 scope.go:117] "RemoveContainer" containerID="bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392" Nov 1 01:04:18.101565 env[1442]: time="2025-11-01T01:04:18.101194095Z" level=info msg="RemoveContainer for \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\"" Nov 1 01:04:18.118797 env[1442]: time="2025-11-01T01:04:18.118744296Z" level=info msg="RemoveContainer for \"bc7866486d779bbefe65d1ede95e3224b5aa71a3e18c774083186af8e53e5392\" returns successfully" Nov 1 01:04:18.161131 systemd[1]: Created slice kubepods-burstable-pod4c28e2ce_644b_420c_b7af_00c74008a172.slice. Nov 1 01:04:18.171314 kubelet[2455]: W1101 01:04:18.171246 2455 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda415738a_def1_4c21_8de7_b70fc582fcec.slice/cri-containerd-d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9.scope WatchSource:0}: container "d503643888b67b0434d165e31c37f5cf88534d499933ec30d08f7c62e094e6f9" in namespace "k8s.io": not found Nov 1 01:04:18.213298 kubelet[2455]: I1101 01:04:18.213256 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-xtables-lock\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213506 kubelet[2455]: I1101 01:04:18.213307 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c28e2ce-644b-420c-b7af-00c74008a172-cilium-config-path\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213506 kubelet[2455]: I1101 01:04:18.213328 2455 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-host-proc-sys-net\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213506 kubelet[2455]: I1101 01:04:18.213347 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-hostproc\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213506 kubelet[2455]: I1101 01:04:18.213368 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c28e2ce-644b-420c-b7af-00c74008a172-clustermesh-secrets\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213506 kubelet[2455]: I1101 01:04:18.213385 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4c28e2ce-644b-420c-b7af-00c74008a172-cilium-ipsec-secrets\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213737 kubelet[2455]: I1101 01:04:18.213403 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-host-proc-sys-kernel\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213737 kubelet[2455]: I1101 01:04:18.213421 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-etc-cni-netd\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213737 kubelet[2455]: I1101 01:04:18.213439 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c28e2ce-644b-420c-b7af-00c74008a172-hubble-tls\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213737 kubelet[2455]: I1101 01:04:18.213459 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-cilium-cgroup\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213737 kubelet[2455]: I1101 01:04:18.213496 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-cilium-run\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.213737 kubelet[2455]: I1101 01:04:18.213521 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-bpf-maps\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.214057 kubelet[2455]: I1101 01:04:18.213544 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-cni-path\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " 
pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.214057 kubelet[2455]: I1101 01:04:18.213567 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c28e2ce-644b-420c-b7af-00c74008a172-lib-modules\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.214057 kubelet[2455]: I1101 01:04:18.213588 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnbqv\" (UniqueName: \"kubernetes.io/projected/4c28e2ce-644b-420c-b7af-00c74008a172-kube-api-access-dnbqv\") pod \"cilium-qh7jq\" (UID: \"4c28e2ce-644b-420c-b7af-00c74008a172\") " pod="kube-system/cilium-qh7jq" Nov 1 01:04:18.488485 env[1442]: time="2025-11-01T01:04:18.488420347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qh7jq,Uid:4c28e2ce-644b-420c-b7af-00c74008a172,Namespace:kube-system,Attempt:0,}" Nov 1 01:04:18.537662 env[1442]: time="2025-11-01T01:04:18.537583212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:04:18.537857 env[1442]: time="2025-11-01T01:04:18.537625813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:04:18.537857 env[1442]: time="2025-11-01T01:04:18.537640213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:04:18.537998 env[1442]: time="2025-11-01T01:04:18.537884716Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6 pid=4435 runtime=io.containerd.runc.v2 Nov 1 01:04:18.552246 systemd[1]: Started cri-containerd-3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6.scope. Nov 1 01:04:18.578606 env[1442]: time="2025-11-01T01:04:18.578528083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qh7jq,Uid:4c28e2ce-644b-420c-b7af-00c74008a172,Namespace:kube-system,Attempt:0,} returns sandbox id \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\"" Nov 1 01:04:18.587415 env[1442]: time="2025-11-01T01:04:18.586911579Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 01:04:18.613964 env[1442]: time="2025-11-01T01:04:18.613899090Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7\"" Nov 1 01:04:18.616302 env[1442]: time="2025-11-01T01:04:18.614764300Z" level=info msg="StartContainer for \"1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7\"" Nov 1 01:04:18.633000 systemd[1]: Started cri-containerd-1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7.scope. 
Nov 1 01:04:18.665522 env[1442]: time="2025-11-01T01:04:18.665463983Z" level=info msg="StartContainer for \"1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7\" returns successfully"
Nov 1 01:04:18.671953 systemd[1]: cri-containerd-1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7.scope: Deactivated successfully.
Nov 1 01:04:18.707173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7-rootfs.mount: Deactivated successfully.
Nov 1 01:04:18.752917 env[1442]: time="2025-11-01T01:04:18.752743686Z" level=info msg="shim disconnected" id=1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7
Nov 1 01:04:18.752917 env[1442]: time="2025-11-01T01:04:18.752816287Z" level=warning msg="cleaning up after shim disconnected" id=1af50d4068a2ef33f269d79d7ad060ea38bc9c9990d429c38a54cca8e10050e7 namespace=k8s.io
Nov 1 01:04:18.752917 env[1442]: time="2025-11-01T01:04:18.752831387Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:18.762741 env[1442]: time="2025-11-01T01:04:18.762690701Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4515 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:19.110357 env[1442]: time="2025-11-01T01:04:19.110204490Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 1 01:04:19.148469 env[1442]: time="2025-11-01T01:04:19.148411728Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"176a9db31a8b65d39a4ae612ca00fd2910984a43ff821b8af4af7d7c409470bb\""
Nov 1 01:04:19.150698 env[1442]: time="2025-11-01T01:04:19.149408939Z" level=info msg="StartContainer for \"176a9db31a8b65d39a4ae612ca00fd2910984a43ff821b8af4af7d7c409470bb\""
Nov 1 01:04:19.169547 systemd[1]: Started cri-containerd-176a9db31a8b65d39a4ae612ca00fd2910984a43ff821b8af4af7d7c409470bb.scope.
Nov 1 01:04:19.206095 env[1442]: time="2025-11-01T01:04:19.206033287Z" level=info msg="StartContainer for \"176a9db31a8b65d39a4ae612ca00fd2910984a43ff821b8af4af7d7c409470bb\" returns successfully"
Nov 1 01:04:19.211660 systemd[1]: cri-containerd-176a9db31a8b65d39a4ae612ca00fd2910984a43ff821b8af4af7d7c409470bb.scope: Deactivated successfully.
Nov 1 01:04:19.243874 env[1442]: time="2025-11-01T01:04:19.243821620Z" level=info msg="shim disconnected" id=176a9db31a8b65d39a4ae612ca00fd2910984a43ff821b8af4af7d7c409470bb
Nov 1 01:04:19.243874 env[1442]: time="2025-11-01T01:04:19.243874220Z" level=warning msg="cleaning up after shim disconnected" id=176a9db31a8b65d39a4ae612ca00fd2910984a43ff821b8af4af7d7c409470bb namespace=k8s.io
Nov 1 01:04:19.243874 env[1442]: time="2025-11-01T01:04:19.243885120Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:19.252535 env[1442]: time="2025-11-01T01:04:19.252488519Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4577 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:19.504447 kubelet[2455]: I1101 01:04:19.504408 2455 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a415738a-def1-4c21-8de7-b70fc582fcec" path="/var/lib/kubelet/pods/a415738a-def1-4c21-8de7-b70fc582fcec/volumes"
Nov 1 01:04:20.117987 env[1442]: time="2025-11-01T01:04:20.117934216Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 1 01:04:20.150311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640828068.mount: Deactivated successfully.
Nov 1 01:04:20.166742 env[1442]: time="2025-11-01T01:04:20.166680771Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b\""
Nov 1 01:04:20.168149 env[1442]: time="2025-11-01T01:04:20.167637982Z" level=info msg="StartContainer for \"d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b\""
Nov 1 01:04:20.191830 systemd[1]: Started cri-containerd-d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b.scope.
Nov 1 01:04:20.227896 systemd[1]: cri-containerd-d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b.scope: Deactivated successfully.
Nov 1 01:04:20.233457 env[1442]: time="2025-11-01T01:04:20.233406431Z" level=info msg="StartContainer for \"d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b\" returns successfully"
Nov 1 01:04:20.264125 env[1442]: time="2025-11-01T01:04:20.264073180Z" level=info msg="shim disconnected" id=d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b
Nov 1 01:04:20.264125 env[1442]: time="2025-11-01T01:04:20.264124681Z" level=warning msg="cleaning up after shim disconnected" id=d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b namespace=k8s.io
Nov 1 01:04:20.264125 env[1442]: time="2025-11-01T01:04:20.264135581Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:20.273002 env[1442]: time="2025-11-01T01:04:20.272950582Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4636 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:20.364699 kubelet[2455]: I1101 01:04:20.364446 2455 setters.go:543] "Node became not ready" node="ci-3510.3.8-n-e458e05b0a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T01:04:20Z","lastTransitionTime":"2025-11-01T01:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 1 01:04:20.650985 kubelet[2455]: E1101 01:04:20.650927 2455 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 1 01:04:20.693726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2530c8e4e518edac1c7b5fcb84511d4b4d51f47bc6198d99bc4f4cd7f2d7b4b-rootfs.mount: Deactivated successfully.
Nov 1 01:04:21.120182 env[1442]: time="2025-11-01T01:04:21.120127824Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 1 01:04:21.151256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067217516.mount: Deactivated successfully.
Nov 1 01:04:21.172525 env[1442]: time="2025-11-01T01:04:21.172461917Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14\""
Nov 1 01:04:21.173481 env[1442]: time="2025-11-01T01:04:21.173444329Z" level=info msg="StartContainer for \"f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14\""
Nov 1 01:04:21.196159 systemd[1]: Started cri-containerd-f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14.scope.
Nov 1 01:04:21.225830 systemd[1]: cri-containerd-f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14.scope: Deactivated successfully.
Nov 1 01:04:21.231598 env[1442]: time="2025-11-01T01:04:21.231536987Z" level=info msg="StartContainer for \"f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14\" returns successfully"
Nov 1 01:04:21.263987 env[1442]: time="2025-11-01T01:04:21.263930154Z" level=info msg="shim disconnected" id=f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14
Nov 1 01:04:21.263987 env[1442]: time="2025-11-01T01:04:21.263984655Z" level=warning msg="cleaning up after shim disconnected" id=f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14 namespace=k8s.io
Nov 1 01:04:21.264355 env[1442]: time="2025-11-01T01:04:21.263999055Z" level=info msg="cleaning up dead shim"
Nov 1 01:04:21.271988 env[1442]: time="2025-11-01T01:04:21.271943645Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4690 runtime=io.containerd.runc.v2\n"
Nov 1 01:04:21.694023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6049bac8b059739eef329a05349e970368bf6c40c060f632ccbe7c0d67efc14-rootfs.mount: Deactivated successfully.
Nov 1 01:04:22.127270 env[1442]: time="2025-11-01T01:04:22.125203211Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 1 01:04:22.163132 env[1442]: time="2025-11-01T01:04:22.163074239Z" level=info msg="CreateContainer within sandbox \"3de327628596bb93d20ddc1065c77bfcffe439f60d0f1e1d0e2481265624c9f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e55450b82fe5e82e3da4d4b3cec732319e82727c7a22b9667f837d5e0cc1149e\""
Nov 1 01:04:22.164033 env[1442]: time="2025-11-01T01:04:22.164000649Z" level=info msg="StartContainer for \"e55450b82fe5e82e3da4d4b3cec732319e82727c7a22b9667f837d5e0cc1149e\""
Nov 1 01:04:22.201566 systemd[1]: Started cri-containerd-e55450b82fe5e82e3da4d4b3cec732319e82727c7a22b9667f837d5e0cc1149e.scope.
Nov 1 01:04:22.249363 env[1442]: time="2025-11-01T01:04:22.249309212Z" level=info msg="StartContainer for \"e55450b82fe5e82e3da4d4b3cec732319e82727c7a22b9667f837d5e0cc1149e\" returns successfully"
Nov 1 01:04:22.657285 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 1 01:04:25.091346 systemd[1]: run-containerd-runc-k8s.io-e55450b82fe5e82e3da4d4b3cec732319e82727c7a22b9667f837d5e0cc1149e-runc.07TI3I.mount: Deactivated successfully.
Nov 1 01:04:25.444474 systemd-networkd[1587]: lxc_health: Link UP
Nov 1 01:04:25.475358 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 01:04:25.475631 systemd-networkd[1587]: lxc_health: Gained carrier
Nov 1 01:04:26.516622 kubelet[2455]: I1101 01:04:26.516545 2455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qh7jq" podStartSLOduration=8.516525753 podStartE2EDuration="8.516525753s" podCreationTimestamp="2025-11-01 01:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:04:23.142745886 +0000 UTC m=+227.835184130" watchObservedRunningTime="2025-11-01 01:04:26.516525753 +0000 UTC m=+231.208963997"
Nov 1 01:04:27.070514 systemd-networkd[1587]: lxc_health: Gained IPv6LL
Nov 1 01:04:31.760558 sshd[4341]: pam_unix(sshd:session): session closed for user core
Nov 1 01:04:31.763858 systemd[1]: sshd@28-10.200.4.7:22-10.200.16.10:53996.service: Deactivated successfully.
Nov 1 01:04:31.765437 systemd[1]: session-31.scope: Deactivated successfully.
Nov 1 01:04:31.765459 systemd-logind[1430]: Session 31 logged out. Waiting for processes to exit.
Nov 1 01:04:31.766639 systemd-logind[1430]: Removed session 31.